2026-03-08 00:00:07.042759 | Job console starting
2026-03-08 00:00:07.070089 | Updating git repos
2026-03-08 00:00:07.486557 | Cloning repos into workspace
2026-03-08 00:00:07.794663 | Restoring repo states
2026-03-08 00:00:07.829746 | Merging changes
2026-03-08 00:00:07.829766 | Checking out repos
2026-03-08 00:00:08.317005 | Preparing playbooks
2026-03-08 00:00:09.746983 | Running Ansible setup
2026-03-08 00:00:17.304969 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-08 00:00:18.727583 |
2026-03-08 00:00:18.727700 | PLAY [Base pre]
2026-03-08 00:00:18.742775 |
2026-03-08 00:00:18.742895 | TASK [Setup log path fact]
2026-03-08 00:00:18.770606 | orchestrator | ok
2026-03-08 00:00:18.788451 |
2026-03-08 00:00:18.788567 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-08 00:00:18.820390 | orchestrator | ok
2026-03-08 00:00:18.829970 |
2026-03-08 00:00:18.830069 | TASK [emit-job-header : Print job information]
2026-03-08 00:00:18.899422 | # Job Information
2026-03-08 00:00:18.899566 | Ansible Version: 2.16.14
2026-03-08 00:00:18.899595 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-08 00:00:18.899623 | Pipeline: periodic-midnight
2026-03-08 00:00:18.899642 | Executor: 521e9411259a
2026-03-08 00:00:18.899659 | Triggered by: https://github.com/osism/testbed
2026-03-08 00:00:18.899677 | Event ID: 5a084b4300424527a1d97f8c219c9234
2026-03-08 00:00:18.905980 |
2026-03-08 00:00:18.909346 | LOOP [emit-job-header : Print node information]
2026-03-08 00:00:19.054495 | orchestrator | ok:
2026-03-08 00:00:19.054668 | orchestrator | # Node Information
2026-03-08 00:00:19.054698 | orchestrator | Inventory Hostname: orchestrator
2026-03-08 00:00:19.054719 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-08 00:00:19.054738 | orchestrator | Username: zuul-testbed06
2026-03-08 00:00:19.054756 | orchestrator | Distro: Debian 12.13
2026-03-08 00:00:19.054776 | orchestrator | Provider: static-testbed
2026-03-08 00:00:19.054793 | orchestrator | Region:
2026-03-08 00:00:19.054810 | orchestrator | Label: testbed-orchestrator
2026-03-08 00:00:19.054827 | orchestrator | Product Name: OpenStack Nova
2026-03-08 00:00:19.054894 | orchestrator | Interface IP: 81.163.193.140
2026-03-08 00:00:19.073751 |
2026-03-08 00:00:19.073850 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-08 00:00:20.014994 | orchestrator -> localhost | changed
2026-03-08 00:00:20.026322 |
2026-03-08 00:00:20.026418 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-08 00:00:22.473420 | orchestrator -> localhost | changed
2026-03-08 00:00:22.488419 |
2026-03-08 00:00:22.488512 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-08 00:00:23.067550 | orchestrator -> localhost | ok
2026-03-08 00:00:23.073354 |
2026-03-08 00:00:23.073445 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-08 00:00:23.110806 | orchestrator | ok
2026-03-08 00:00:23.134514 | orchestrator | included: /var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-08 00:00:23.140806 |
2026-03-08 00:00:23.140889 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-08 00:00:25.220230 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-08 00:00:25.220649 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/628392d9df5a4e3bac28b23c6f85c4d8_id_rsa
2026-03-08 00:00:25.220771 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/628392d9df5a4e3bac28b23c6f85c4d8_id_rsa.pub
2026-03-08 00:00:25.220812 | orchestrator -> localhost | The key fingerprint is:
2026-03-08 00:00:25.221012 | orchestrator -> localhost | SHA256:TW4h+jXE7V5IS9HZ4SFlZh8I+TtPnX+QsV4RtJeUAPU zuul-build-sshkey
2026-03-08 00:00:25.221041 | orchestrator -> localhost | The key's randomart image is:
2026-03-08 00:00:25.221148 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-08 00:00:25.221180 | orchestrator -> localhost | | o*==&o|
2026-03-08 00:00:25.221208 | orchestrator -> localhost | | . o o@oB|
2026-03-08 00:00:25.221300 | orchestrator -> localhost | | . = = E+|
2026-03-08 00:00:25.221323 | orchestrator -> localhost | | . * = +...|
2026-03-08 00:00:25.221343 | orchestrator -> localhost | | . S * + o++|
2026-03-08 00:00:25.221436 | orchestrator -> localhost | | . o o ++oo|
2026-03-08 00:00:25.221461 | orchestrator -> localhost | | . ..+o.|
2026-03-08 00:00:25.221552 | orchestrator -> localhost | | ..o|
2026-03-08 00:00:25.221579 | orchestrator -> localhost | | .|
2026-03-08 00:00:25.221599 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-08 00:00:25.221714 | orchestrator -> localhost | ok: Runtime: 0:00:01.038587
2026-03-08 00:00:25.246786 |
2026-03-08 00:00:25.246933 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-08 00:00:25.356357 | orchestrator | ok
2026-03-08 00:00:25.376692 | orchestrator | included: /var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-08 00:00:25.423774 |
2026-03-08 00:00:25.423873 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-08 00:00:25.511177 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:25.523825 |
2026-03-08 00:00:25.523914 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-08 00:00:26.484545 | orchestrator | changed
2026-03-08 00:00:26.496592 |
2026-03-08 00:00:26.496681 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-08 00:00:26.808311 | orchestrator | ok
2026-03-08 00:00:26.813298 |
2026-03-08 00:00:26.818955 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-08 00:00:27.352803 | orchestrator | ok
2026-03-08 00:00:27.358748 |
2026-03-08 00:00:27.358848 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-08 00:00:27.879857 | orchestrator | ok
2026-03-08 00:00:27.884786 |
2026-03-08 00:00:27.884861 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-08 00:00:27.923226 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:27.928634 |
2026-03-08 00:00:27.928716 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-08 00:00:29.520311 | orchestrator -> localhost | changed
2026-03-08 00:00:29.536992 |
2026-03-08 00:00:29.537088 | TASK [add-build-sshkey : Add back temp key]
2026-03-08 00:00:30.331592 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/628392d9df5a4e3bac28b23c6f85c4d8_id_rsa (zuul-build-sshkey)
2026-03-08 00:00:30.331770 | orchestrator -> localhost | ok: Runtime: 0:00:00.015347
2026-03-08 00:00:30.344455 |
2026-03-08 00:00:30.344608 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-08 00:00:30.953607 | orchestrator | ok
2026-03-08 00:00:30.959138 |
2026-03-08 00:00:30.959219 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-08 00:00:31.055487 | orchestrator | skipping: Conditional result was False
2026-03-08 00:00:31.107703 |
2026-03-08 00:00:31.107794 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-08 00:00:31.616804 | orchestrator | ok
2026-03-08 00:00:31.629727 |
2026-03-08 00:00:31.629819 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-08 00:00:31.686934 | orchestrator | ok
2026-03-08 00:00:31.697719 |
2026-03-08 00:00:31.697808 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-08 00:00:32.721832 | orchestrator -> localhost | ok
2026-03-08 00:00:32.728111 |
2026-03-08 00:00:32.728194 | TASK [validate-host : Collect information about the host]
2026-03-08 00:00:34.365363 | orchestrator | ok
2026-03-08 00:00:34.409800 |
2026-03-08 00:00:34.409915 | TASK [validate-host : Sanitize hostname]
2026-03-08 00:00:34.561180 | orchestrator | ok
2026-03-08 00:00:34.565754 |
2026-03-08 00:00:34.565846 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-08 00:00:35.578271 | orchestrator -> localhost | changed
2026-03-08 00:00:35.583521 |
2026-03-08 00:00:35.583606 | TASK [validate-host : Collect information about zuul worker]
2026-03-08 00:00:36.289853 | orchestrator | ok
2026-03-08 00:00:36.294327 |
2026-03-08 00:00:36.294412 | TASK [validate-host : Write out all zuul information for each host]
2026-03-08 00:00:37.493603 | orchestrator -> localhost | changed
2026-03-08 00:00:37.502024 |
2026-03-08 00:00:37.502107 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-08 00:00:37.796788 | orchestrator | ok
2026-03-08 00:00:37.804651 |
2026-03-08 00:00:37.804734 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-08 00:01:58.229097 | orchestrator | changed:
2026-03-08 00:01:58.230752 | orchestrator | .d..t...... src/
2026-03-08 00:01:58.230847 | orchestrator | .d..t...... src/github.com/
2026-03-08 00:01:58.230878 | orchestrator | .d..t...... src/github.com/osism/
2026-03-08 00:01:58.230902 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-08 00:01:58.230923 | orchestrator | RedHat.yml
2026-03-08 00:01:58.249261 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-08 00:01:58.249279 | orchestrator | RedHat.yml
2026-03-08 00:01:58.249363 | orchestrator | = 2.2.0"...
2026-03-08 00:02:09.099932 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-08 00:02:09.114739 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-08 00:02:09.559261 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-08 00:02:10.261159 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-08 00:02:10.327809 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-08 00:02:10.849039 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-08 00:02:10.937637 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-08 00:02:11.637091 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-08 00:02:11.637152 | orchestrator |
2026-03-08 00:02:11.637160 | orchestrator | Providers are signed by their developers.
2026-03-08 00:02:11.637166 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-08 00:02:11.637171 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-08 00:02:11.637184 | orchestrator |
2026-03-08 00:02:11.637189 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-08 00:02:11.637204 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-08 00:02:11.637208 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-08 00:02:11.637212 | orchestrator | you run "tofu init" in the future.
2026-03-08 00:02:11.637456 | orchestrator |
2026-03-08 00:02:11.637469 | orchestrator | OpenTofu has been successfully initialized!
2026-03-08 00:02:11.637478 | orchestrator |
2026-03-08 00:02:11.637485 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-08 00:02:11.637489 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-08 00:02:11.637493 | orchestrator | should now work.
2026-03-08 00:02:11.637497 | orchestrator |
2026-03-08 00:02:11.637501 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-08 00:02:11.637508 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-08 00:02:11.637512 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-08 00:02:11.797624 | orchestrator | Created and switched to workspace "ci"!
2026-03-08 00:02:11.797707 | orchestrator |
2026-03-08 00:02:11.797722 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-08 00:02:11.797733 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-08 00:02:11.797744 | orchestrator | for this configuration.
2026-03-08 00:02:11.914682 | orchestrator | ci.auto.tfvars
2026-03-08 00:02:12.128076 | orchestrator | default_custom.tf
2026-03-08 00:02:14.062083 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-08 00:02:14.657947 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-08 00:02:14.909311 | orchestrator |
2026-03-08 00:02:14.909381 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-08 00:02:14.909389 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-08 00:02:14.909395 | orchestrator | + create
2026-03-08 00:02:14.909401 | orchestrator | <= read (data resources)
2026-03-08 00:02:14.909406 | orchestrator |
2026-03-08 00:02:14.909412 | orchestrator | OpenTofu will perform the following actions:
2026-03-08 00:02:14.909426 | orchestrator |
2026-03-08 00:02:14.909431 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-08 00:02:14.909437 | orchestrator | # (config refers to values not yet known)
2026-03-08 00:02:14.909441 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-08 00:02:14.909446 | orchestrator | + checksum = (known after apply)
2026-03-08 00:02:14.909451 | orchestrator | + created_at = (known after apply)
2026-03-08 00:02:14.909456 | orchestrator | + file = (known after apply)
2026-03-08 00:02:14.909461 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909485 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.909490 | orchestrator | + min_disk_gb = (known after apply)
2026-03-08 00:02:14.909495 | orchestrator | + min_ram_mb = (known after apply)
2026-03-08 00:02:14.909500 | orchestrator | + most_recent = true
2026-03-08 00:02:14.909505 | orchestrator | + name = (known after apply)
2026-03-08 00:02:14.909509 | orchestrator | + protected = (known after apply)
2026-03-08 00:02:14.909514 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.909522 | orchestrator | + schema = (known after apply)
2026-03-08 00:02:14.909526 | orchestrator | + size_bytes = (known after apply)
2026-03-08 00:02:14.909531 | orchestrator | + tags = (known after apply)
2026-03-08 00:02:14.909536 | orchestrator | + updated_at = (known after apply)
2026-03-08 00:02:14.909541 | orchestrator | }
2026-03-08 00:02:14.909547 | orchestrator |
2026-03-08 00:02:14.909569 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-08 00:02:14.909574 | orchestrator | # (config refers to values not yet known)
2026-03-08 00:02:14.909579 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-08 00:02:14.909584 | orchestrator | + checksum = (known after apply)
2026-03-08 00:02:14.909588 | orchestrator | + created_at = (known after apply)
2026-03-08 00:02:14.909593 | orchestrator | + file = (known after apply)
2026-03-08 00:02:14.909598 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909602 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.909607 | orchestrator | + min_disk_gb = (known after apply)
2026-03-08 00:02:14.909612 | orchestrator | + min_ram_mb = (known after apply)
2026-03-08 00:02:14.909616 | orchestrator | + most_recent = true
2026-03-08 00:02:14.909621 | orchestrator | + name = (known after apply)
2026-03-08 00:02:14.909625 | orchestrator | + protected = (known after apply)
2026-03-08 00:02:14.909630 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.909635 | orchestrator | + schema = (known after apply)
2026-03-08 00:02:14.909639 | orchestrator | + size_bytes = (known after apply)
2026-03-08 00:02:14.909644 | orchestrator | + tags = (known after apply)
2026-03-08 00:02:14.909648 | orchestrator | + updated_at = (known after apply)
2026-03-08 00:02:14.909653 | orchestrator | }
2026-03-08 00:02:14.909699 | orchestrator |
2026-03-08 00:02:14.909705 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-08 00:02:14.909710 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-08 00:02:14.909714 | orchestrator | + content = (known after apply)
2026-03-08 00:02:14.909719 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:14.909724 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:14.909728 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:14.909733 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:14.909737 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:14.909742 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:14.909747 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:14.909751 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:14.909756 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-08 00:02:14.909761 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909765 | orchestrator | }
2026-03-08 00:02:14.909770 | orchestrator |
2026-03-08 00:02:14.909774 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-08 00:02:14.909779 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-08 00:02:14.909784 | orchestrator | + content = (known after apply)
2026-03-08 00:02:14.909788 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:14.909793 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:14.909797 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:14.909802 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:14.909806 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:14.909816 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:14.909821 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:14.909825 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:14.909835 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-08 00:02:14.909839 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909844 | orchestrator | }
2026-03-08 00:02:14.909850 | orchestrator |
2026-03-08 00:02:14.909855 | orchestrator | # local_file.inventory will be created
2026-03-08 00:02:14.909860 | orchestrator | + resource "local_file" "inventory" {
2026-03-08 00:02:14.909864 | orchestrator | + content = (known after apply)
2026-03-08 00:02:14.909869 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:14.909873 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:14.909878 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:14.909882 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:14.909887 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:14.909892 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:14.909897 | orchestrator | + directory_permission = "0777"
2026-03-08 00:02:14.909901 | orchestrator | + file_permission = "0644"
2026-03-08 00:02:14.909906 | orchestrator | + filename = "inventory.ci"
2026-03-08 00:02:14.909910 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909915 | orchestrator | }
2026-03-08 00:02:14.909920 | orchestrator |
2026-03-08 00:02:14.909924 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-08 00:02:14.909929 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-08 00:02:14.909933 | orchestrator | + content = (sensitive value)
2026-03-08 00:02:14.909938 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-08 00:02:14.909942 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-08 00:02:14.909947 | orchestrator | + content_md5 = (known after apply)
2026-03-08 00:02:14.909951 | orchestrator | + content_sha1 = (known after apply)
2026-03-08 00:02:14.909956 | orchestrator | + content_sha256 = (known after apply)
2026-03-08 00:02:14.909961 | orchestrator | + content_sha512 = (known after apply)
2026-03-08 00:02:14.909965 | orchestrator | + directory_permission = "0700"
2026-03-08 00:02:14.909970 | orchestrator | + file_permission = "0600"
2026-03-08 00:02:14.909974 | orchestrator | + filename = ".id_rsa.ci"
2026-03-08 00:02:14.909979 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.909984 | orchestrator | }
2026-03-08 00:02:14.909988 | orchestrator |
2026-03-08 00:02:14.909993 | orchestrator | # null_resource.node_semaphore will be created
2026-03-08 00:02:14.909997 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-08 00:02:14.910002 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.910006 | orchestrator | }
2026-03-08 00:02:14.910013 | orchestrator |
2026-03-08 00:02:14.910040 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-08 00:02:14.910045 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-08 00:02:14.910050 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.910054 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.910059 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.910063 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.910068 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.910073 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-08 00:02:14.910082 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.910087 | orchestrator | + size = 80
2026-03-08 00:02:14.910091 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.910096 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.910100 | orchestrator | }
2026-03-08 00:02:14.910105 | orchestrator |
2026-03-08 00:02:14.910110 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-08 00:02:14.910114 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.910119 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.910123 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.910128 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.910137 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.910141 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.910146 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-08 00:02:14.910151 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.910155 | orchestrator | + size = 80
2026-03-08 00:02:14.910160 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.910164 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.910169 | orchestrator | }
2026-03-08 00:02:14.912122 | orchestrator |
2026-03-08 00:02:14.912145 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-08 00:02:14.912151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.912157 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912162 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912167 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912172 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.912176 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912181 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-08 00:02:14.912185 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912190 | orchestrator | + size = 80
2026-03-08 00:02:14.912195 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912199 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912204 | orchestrator | }
2026-03-08 00:02:14.912208 | orchestrator |
2026-03-08 00:02:14.912213 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-08 00:02:14.912218 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.912222 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912227 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912232 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912236 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.912241 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912245 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-08 00:02:14.912250 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912255 | orchestrator | + size = 80
2026-03-08 00:02:14.912266 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912271 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912275 | orchestrator | }
2026-03-08 00:02:14.912280 | orchestrator |
2026-03-08 00:02:14.912284 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-08 00:02:14.912289 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.912293 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912298 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912303 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912307 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.912312 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912316 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-08 00:02:14.912321 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912325 | orchestrator | + size = 80
2026-03-08 00:02:14.912330 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912335 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912339 | orchestrator | }
2026-03-08 00:02:14.912344 | orchestrator |
2026-03-08 00:02:14.912348 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-08 00:02:14.912353 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.912358 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912362 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912367 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912378 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.912383 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912387 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-08 00:02:14.912392 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912397 | orchestrator | + size = 80
2026-03-08 00:02:14.912401 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912406 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912411 | orchestrator | }
2026-03-08 00:02:14.912415 | orchestrator |
2026-03-08 00:02:14.912420 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-08 00:02:14.912424 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-08 00:02:14.912429 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912433 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912438 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912442 | orchestrator | + image_id = (known after apply)
2026-03-08 00:02:14.912447 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912451 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-08 00:02:14.912456 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912461 | orchestrator | + size = 80
2026-03-08 00:02:14.912465 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912470 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912474 | orchestrator | }
2026-03-08 00:02:14.912479 | orchestrator |
2026-03-08 00:02:14.912483 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-08 00:02:14.912489 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912494 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912498 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912503 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912507 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912512 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-08 00:02:14.912517 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912521 | orchestrator | + size = 20
2026-03-08 00:02:14.912526 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912531 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912536 | orchestrator | }
2026-03-08 00:02:14.912540 | orchestrator |
2026-03-08 00:02:14.912545 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-08 00:02:14.912550 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912555 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912560 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912564 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912569 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912573 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-08 00:02:14.912578 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912583 | orchestrator | + size = 20
2026-03-08 00:02:14.912593 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912598 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912603 | orchestrator | }
2026-03-08 00:02:14.912608 | orchestrator |
2026-03-08 00:02:14.912612 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-08 00:02:14.912617 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912621 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912626 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912631 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912635 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912640 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-08 00:02:14.912644 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912652 | orchestrator | + size = 20
2026-03-08 00:02:14.912674 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912679 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912684 | orchestrator | }
2026-03-08 00:02:14.912688 | orchestrator |
2026-03-08 00:02:14.912693 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-08 00:02:14.912697 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912702 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912706 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912711 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912719 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912723 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-08 00:02:14.912728 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912733 | orchestrator | + size = 20
2026-03-08 00:02:14.912737 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912742 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912746 | orchestrator | }
2026-03-08 00:02:14.912751 | orchestrator |
2026-03-08 00:02:14.912756 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-08 00:02:14.912760 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912765 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912769 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912774 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912778 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912783 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-08 00:02:14.912787 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912792 | orchestrator | + size = 20
2026-03-08 00:02:14.912797 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912801 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912806 | orchestrator | }
2026-03-08 00:02:14.912810 | orchestrator |
2026-03-08 00:02:14.912815 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-08 00:02:14.912819 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912824 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912828 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912833 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912837 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912842 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-08 00:02:14.912846 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912851 | orchestrator | + size = 20
2026-03-08 00:02:14.912855 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912860 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912864 | orchestrator | }
2026-03-08 00:02:14.912869 | orchestrator |
2026-03-08 00:02:14.912873 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-08 00:02:14.912878 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912882 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912887 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912892 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912896 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912901 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-08 00:02:14.912905 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912910 | orchestrator | + size = 20
2026-03-08 00:02:14.912914 | orchestrator | + volume_retype_policy = "never"
2026-03-08 00:02:14.912919 | orchestrator | + volume_type = "ssd"
2026-03-08 00:02:14.912923 | orchestrator | }
2026-03-08 00:02:14.912928 | orchestrator |
2026-03-08 00:02:14.912932 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-08 00:02:14.912937 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-08 00:02:14.912945 | orchestrator | + attachment = (known after apply)
2026-03-08 00:02:14.912949 | orchestrator | + availability_zone = "nova"
2026-03-08 00:02:14.912954 | orchestrator | + id = (known after apply)
2026-03-08 00:02:14.912959 | orchestrator | + metadata = (known after apply)
2026-03-08 00:02:14.912963 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-08 00:02:14.912968 | orchestrator | + region = (known after apply)
2026-03-08 00:02:14.912972 | orchestrator | + size = 20 2026-03-08 00:02:14.912977 | orchestrator | + volume_retype_policy = "never" 2026-03-08 00:02:14.912981 | orchestrator | + volume_type = "ssd" 2026-03-08 00:02:14.912986 | orchestrator | } 2026-03-08 00:02:14.912990 | orchestrator | 2026-03-08 00:02:14.912995 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-08 00:02:14.913000 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-08 00:02:14.913004 | orchestrator | + attachment = (known after apply) 2026-03-08 00:02:14.913009 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913013 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913017 | orchestrator | + metadata = (known after apply) 2026-03-08 00:02:14.913022 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-08 00:02:14.913027 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.913031 | orchestrator | + size = 20 2026-03-08 00:02:14.913043 | orchestrator | + volume_retype_policy = "never" 2026-03-08 00:02:14.913048 | orchestrator | + volume_type = "ssd" 2026-03-08 00:02:14.913053 | orchestrator | } 2026-03-08 00:02:14.913057 | orchestrator | 2026-03-08 00:02:14.913062 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-08 00:02:14.913073 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-08 00:02:14.913078 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.913085 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.913090 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.913095 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.913100 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913105 | orchestrator | + config_drive = true 2026-03-08 00:02:14.913113 | orchestrator | + created = (known after apply) 
2026-03-08 00:02:14.913118 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.913122 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-08 00:02:14.913127 | orchestrator | + force_delete = false 2026-03-08 00:02:14.913131 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.913136 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913140 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.913145 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.913149 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.913154 | orchestrator | + name = "testbed-manager" 2026-03-08 00:02:14.913158 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.913163 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.913167 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.913172 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.913176 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.913181 | orchestrator | + user_data = (sensitive value) 2026-03-08 00:02:14.913185 | orchestrator | 2026-03-08 00:02:14.913190 | orchestrator | + block_device { 2026-03-08 00:02:14.913195 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.913199 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.913204 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.913208 | orchestrator | + multiattach = false 2026-03-08 00:02:14.913213 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.913217 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913225 | orchestrator | } 2026-03-08 00:02:14.913230 | orchestrator | 2026-03-08 00:02:14.913235 | orchestrator | + network { 2026-03-08 00:02:14.913239 | orchestrator | + access_network = false 2026-03-08 00:02:14.913244 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.913248 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.913253 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.913257 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.913262 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.913266 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913271 | orchestrator | } 2026-03-08 00:02:14.913275 | orchestrator | } 2026-03-08 00:02:14.913280 | orchestrator | 2026-03-08 00:02:14.913284 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-08 00:02:14.913289 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.913294 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.913298 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.913303 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.913307 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.913312 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913316 | orchestrator | + config_drive = true 2026-03-08 00:02:14.913321 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.913325 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.913330 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.913334 | orchestrator | + force_delete = false 2026-03-08 00:02:14.913339 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.913343 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913348 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.913352 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.913357 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.913361 | orchestrator | + name = "testbed-node-0" 2026-03-08 00:02:14.913366 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.913370 | orchestrator | + region 
= (known after apply) 2026-03-08 00:02:14.913375 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.913379 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.913384 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.913388 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.913393 | orchestrator | 2026-03-08 00:02:14.913398 | orchestrator | + block_device { 2026-03-08 00:02:14.913402 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.913407 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.913411 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.913416 | orchestrator | + multiattach = false 2026-03-08 00:02:14.913420 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.913425 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913429 | orchestrator | } 2026-03-08 00:02:14.913434 | orchestrator | 2026-03-08 00:02:14.913438 | orchestrator | + network { 2026-03-08 00:02:14.913443 | orchestrator | + access_network = false 2026-03-08 00:02:14.913447 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.913452 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.913457 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.913461 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.913466 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.913470 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913475 | orchestrator | } 2026-03-08 00:02:14.913479 | orchestrator | } 2026-03-08 00:02:14.913484 | orchestrator | 2026-03-08 00:02:14.913488 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-08 00:02:14.913493 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.913498 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 
00:02:14.913506 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.913510 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.913515 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.913519 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913524 | orchestrator | + config_drive = true 2026-03-08 00:02:14.913528 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.913533 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.913537 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.913542 | orchestrator | + force_delete = false 2026-03-08 00:02:14.913546 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.913553 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913558 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.913563 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.913567 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.913572 | orchestrator | + name = "testbed-node-1" 2026-03-08 00:02:14.913576 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.913581 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.913585 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.913590 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.913594 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.913601 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.913606 | orchestrator | 2026-03-08 00:02:14.913610 | orchestrator | + block_device { 2026-03-08 00:02:14.913615 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.913619 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.913624 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.913629 | orchestrator | + multiattach = false 2026-03-08 
00:02:14.913634 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.913639 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913644 | orchestrator | } 2026-03-08 00:02:14.913648 | orchestrator | 2026-03-08 00:02:14.913653 | orchestrator | + network { 2026-03-08 00:02:14.913674 | orchestrator | + access_network = false 2026-03-08 00:02:14.913679 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.913684 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.913688 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.913693 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.913697 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.913702 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913706 | orchestrator | } 2026-03-08 00:02:14.913711 | orchestrator | } 2026-03-08 00:02:14.913715 | orchestrator | 2026-03-08 00:02:14.913720 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-08 00:02:14.913725 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.913729 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.913734 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.913738 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.913743 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.913747 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913752 | orchestrator | + config_drive = true 2026-03-08 00:02:14.913756 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.913761 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.913765 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.913770 | orchestrator | + force_delete = false 2026-03-08 00:02:14.913774 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 
00:02:14.913779 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913783 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.913792 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.913796 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.913801 | orchestrator | + name = "testbed-node-2" 2026-03-08 00:02:14.913805 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.913810 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.913814 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.913819 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.913823 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.913828 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.913832 | orchestrator | 2026-03-08 00:02:14.913837 | orchestrator | + block_device { 2026-03-08 00:02:14.913841 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.913846 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.913851 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.913855 | orchestrator | + multiattach = false 2026-03-08 00:02:14.913859 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.913864 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.913869 | orchestrator | } 2026-03-08 00:02:14.913873 | orchestrator | 2026-03-08 00:02:14.913878 | orchestrator | + network { 2026-03-08 00:02:14.913882 | orchestrator | + access_network = false 2026-03-08 00:02:14.913887 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.913891 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.913896 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.913900 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.913905 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.913909 | orchestrator | + uuid 
= (known after apply) 2026-03-08 00:02:14.913914 | orchestrator | } 2026-03-08 00:02:14.913918 | orchestrator | } 2026-03-08 00:02:14.913923 | orchestrator | 2026-03-08 00:02:14.913933 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-08 00:02:14.913938 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.913943 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.913947 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.913952 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.913956 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.913961 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.913965 | orchestrator | + config_drive = true 2026-03-08 00:02:14.913970 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.913974 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.913979 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.913983 | orchestrator | + force_delete = false 2026-03-08 00:02:14.913988 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.913992 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.913997 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.914001 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.914006 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.914010 | orchestrator | + name = "testbed-node-3" 2026-03-08 00:02:14.914068 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.914073 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914078 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.914083 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.914087 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.914095 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.914100 | orchestrator | 2026-03-08 00:02:14.914105 | orchestrator | + block_device { 2026-03-08 00:02:14.914109 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.914114 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.914118 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.914128 | orchestrator | + multiattach = false 2026-03-08 00:02:14.914133 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.914138 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914142 | orchestrator | } 2026-03-08 00:02:14.914147 | orchestrator | 2026-03-08 00:02:14.914151 | orchestrator | + network { 2026-03-08 00:02:14.914156 | orchestrator | + access_network = false 2026-03-08 00:02:14.914160 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.914165 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.914169 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.914174 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.914178 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.914183 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914188 | orchestrator | } 2026-03-08 00:02:14.914192 | orchestrator | } 2026-03-08 00:02:14.914197 | orchestrator | 2026-03-08 00:02:14.914202 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-08 00:02:14.914206 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.914211 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.914216 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.914220 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.914225 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.914298 | orchestrator | + availability_zone = "nova" 2026-03-08 
00:02:14.914303 | orchestrator | + config_drive = true 2026-03-08 00:02:14.914308 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.914313 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.914317 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.914322 | orchestrator | + force_delete = false 2026-03-08 00:02:14.914326 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.914331 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914335 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.914340 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.914344 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.914349 | orchestrator | + name = "testbed-node-4" 2026-03-08 00:02:14.914353 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.914358 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914362 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.914367 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.914371 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.914376 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.914381 | orchestrator | 2026-03-08 00:02:14.914385 | orchestrator | + block_device { 2026-03-08 00:02:14.914390 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.914394 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.914399 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.914403 | orchestrator | + multiattach = false 2026-03-08 00:02:14.914408 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.914412 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914417 | orchestrator | } 2026-03-08 00:02:14.914421 | orchestrator | 2026-03-08 00:02:14.914426 | orchestrator | + network { 2026-03-08 00:02:14.914430 | orchestrator | + 
access_network = false 2026-03-08 00:02:14.914435 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.914439 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.914444 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.914448 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.914453 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.914457 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914462 | orchestrator | } 2026-03-08 00:02:14.914466 | orchestrator | } 2026-03-08 00:02:14.914475 | orchestrator | 2026-03-08 00:02:14.914480 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-08 00:02:14.914484 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-08 00:02:14.914489 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-08 00:02:14.914493 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-08 00:02:14.914498 | orchestrator | + all_metadata = (known after apply) 2026-03-08 00:02:14.914502 | orchestrator | + all_tags = (known after apply) 2026-03-08 00:02:14.914507 | orchestrator | + availability_zone = "nova" 2026-03-08 00:02:14.914511 | orchestrator | + config_drive = true 2026-03-08 00:02:14.914516 | orchestrator | + created = (known after apply) 2026-03-08 00:02:14.914521 | orchestrator | + flavor_id = (known after apply) 2026-03-08 00:02:14.914525 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-08 00:02:14.914530 | orchestrator | + force_delete = false 2026-03-08 00:02:14.914534 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-08 00:02:14.914539 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914543 | orchestrator | + image_id = (known after apply) 2026-03-08 00:02:14.914548 | orchestrator | + image_name = (known after apply) 2026-03-08 00:02:14.914552 | orchestrator | + key_pair = "testbed" 2026-03-08 00:02:14.914557 | orchestrator | 
+ name = "testbed-node-5" 2026-03-08 00:02:14.914561 | orchestrator | + power_state = "active" 2026-03-08 00:02:14.914566 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914570 | orchestrator | + security_groups = (known after apply) 2026-03-08 00:02:14.914575 | orchestrator | + stop_before_destroy = false 2026-03-08 00:02:14.914579 | orchestrator | + updated = (known after apply) 2026-03-08 00:02:14.914584 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-08 00:02:14.914588 | orchestrator | 2026-03-08 00:02:14.914593 | orchestrator | + block_device { 2026-03-08 00:02:14.914597 | orchestrator | + boot_index = 0 2026-03-08 00:02:14.914602 | orchestrator | + delete_on_termination = false 2026-03-08 00:02:14.914606 | orchestrator | + destination_type = "volume" 2026-03-08 00:02:14.914611 | orchestrator | + multiattach = false 2026-03-08 00:02:14.914615 | orchestrator | + source_type = "volume" 2026-03-08 00:02:14.914620 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914625 | orchestrator | } 2026-03-08 00:02:14.914629 | orchestrator | 2026-03-08 00:02:14.914634 | orchestrator | + network { 2026-03-08 00:02:14.914638 | orchestrator | + access_network = false 2026-03-08 00:02:14.914646 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-08 00:02:14.914650 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-08 00:02:14.914687 | orchestrator | + mac = (known after apply) 2026-03-08 00:02:14.914696 | orchestrator | + name = (known after apply) 2026-03-08 00:02:14.914703 | orchestrator | + port = (known after apply) 2026-03-08 00:02:14.914712 | orchestrator | + uuid = (known after apply) 2026-03-08 00:02:14.914717 | orchestrator | } 2026-03-08 00:02:14.914722 | orchestrator | } 2026-03-08 00:02:14.914726 | orchestrator | 2026-03-08 00:02:14.914731 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-08 00:02:14.914736 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-08 00:02:14.914740 | orchestrator | + fingerprint = (known after apply) 2026-03-08 00:02:14.914745 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914749 | orchestrator | + name = "testbed" 2026-03-08 00:02:14.914754 | orchestrator | + private_key = (sensitive value) 2026-03-08 00:02:14.914758 | orchestrator | + public_key = (known after apply) 2026-03-08 00:02:14.914763 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914767 | orchestrator | + user_id = (known after apply) 2026-03-08 00:02:14.914772 | orchestrator | } 2026-03-08 00:02:14.914777 | orchestrator | 2026-03-08 00:02:14.914781 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-08 00:02:14.914786 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:14.914795 | orchestrator | + device = (known after apply) 2026-03-08 00:02:14.914799 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914804 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:14.914808 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914817 | orchestrator | + volume_id = (known after apply) 2026-03-08 00:02:14.914822 | orchestrator | } 2026-03-08 00:02:14.914827 | orchestrator | 2026-03-08 00:02:14.914832 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-08 00:02:14.914837 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:14.914841 | orchestrator | + device = (known after apply) 2026-03-08 00:02:14.914846 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914851 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:14.914855 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914860 | orchestrator | + volume_id = (known after apply) 2026-03-08 
00:02:14.914865 | orchestrator | } 2026-03-08 00:02:14.914869 | orchestrator | 2026-03-08 00:02:14.914874 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-08 00:02:14.914878 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:14.914883 | orchestrator | + device = (known after apply) 2026-03-08 00:02:14.914887 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914892 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:14.914896 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914901 | orchestrator | + volume_id = (known after apply) 2026-03-08 00:02:14.914905 | orchestrator | } 2026-03-08 00:02:14.914910 | orchestrator | 2026-03-08 00:02:14.914914 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-08 00:02:14.914919 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:14.914924 | orchestrator | + device = (known after apply) 2026-03-08 00:02:14.914928 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914933 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:14.914937 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.914941 | orchestrator | + volume_id = (known after apply) 2026-03-08 00:02:14.914946 | orchestrator | } 2026-03-08 00:02:14.914951 | orchestrator | 2026-03-08 00:02:14.914955 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-08 00:02:14.914960 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-08 00:02:14.914964 | orchestrator | + device = (known after apply) 2026-03-08 00:02:14.914969 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.914973 | orchestrator | + instance_id = (known after apply) 2026-03-08 00:02:14.914978 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-03-08 00:02:14.917442 | orchestrator | + ip_version = 4 2026-03-08 00:02:14.917446 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-08 00:02:14.917450 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-08 00:02:14.917454 | orchestrator | + name = "subnet-testbed-management" 2026-03-08 00:02:14.917458 | orchestrator | + network_id = (known after apply) 2026-03-08 00:02:14.917462 | orchestrator | + no_gateway = false 2026-03-08 00:02:14.917467 | orchestrator | + region = (known after apply) 2026-03-08 00:02:14.917471 | orchestrator | + service_types = (known after apply) 2026-03-08 00:02:14.917479 | orchestrator | + tenant_id = (known after apply) 2026-03-08 00:02:14.917483 | orchestrator | 2026-03-08 00:02:14.917487 | orchestrator | + allocation_pool { 2026-03-08 00:02:14.917491 | orchestrator | + end = "192.168.31.250" 2026-03-08 00:02:14.917495 | orchestrator | + start = "192.168.31.200" 2026-03-08 00:02:14.917499 | orchestrator | } 2026-03-08 00:02:14.917504 | orchestrator | } 2026-03-08 00:02:14.917508 | orchestrator | 2026-03-08 00:02:14.917512 | orchestrator | # terraform_data.image will be created 2026-03-08 00:02:14.917516 | orchestrator | + resource "terraform_data" "image" { 2026-03-08 00:02:14.917521 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.917525 | orchestrator | + input = "Ubuntu 24.04" 2026-03-08 00:02:14.917529 | orchestrator | + output = (known after apply) 2026-03-08 00:02:14.917534 | orchestrator | } 2026-03-08 00:02:14.917538 | orchestrator | 2026-03-08 00:02:14.917543 | orchestrator | # terraform_data.image_node will be created 2026-03-08 00:02:14.917547 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-08 00:02:14.917551 | orchestrator | + id = (known after apply) 2026-03-08 00:02:14.917555 | orchestrator | + input = "Ubuntu 24.04" 2026-03-08 00:02:14.917559 | orchestrator | + output = (known after apply) 2026-03-08 00:02:14.917563 | orchestrator | } 2026-03-08 
00:02:14.917567 | orchestrator | 2026-03-08 00:02:14.917571 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-03-08 00:02:14.917576 | orchestrator | 2026-03-08 00:02:14.917580 | orchestrator | Changes to Outputs: 2026-03-08 00:02:14.917584 | orchestrator | + manager_address = (sensitive value) 2026-03-08 00:02:14.917588 | orchestrator | + private_key = (sensitive value) 2026-03-08 00:02:15.129991 | orchestrator | terraform_data.image_node: Creating... 2026-03-08 00:02:15.130088 | orchestrator | terraform_data.image: Creating... 2026-03-08 00:02:15.130272 | orchestrator | terraform_data.image: Creation complete after 0s [id=2ead42da-5309-f444-529f-35036b3d9dba] 2026-03-08 00:02:15.130284 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=90f6d8bf-68f5-fffc-faf2-4fd26ccdd58d] 2026-03-08 00:02:15.156207 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-08 00:02:15.157147 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-08 00:02:15.159573 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-08 00:02:15.164091 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-08 00:02:15.164210 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-08 00:02:15.164268 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-08 00:02:15.171888 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-08 00:02:15.669958 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-08 00:02:15.972114 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-08 00:02:15.973455 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-08 00:02:15.973498 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2026-03-08 00:02:16.197178 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=01caf39b-18b6-494a-b3fc-250b45f63686]
2026-03-08 00:02:16.204297 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-08 00:02:17.189691 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-08 00:02:17.299438 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-08 00:02:17.303183 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-08 00:02:17.355612 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-08 00:02:17.365292 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-08 00:02:18.191934 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=7967d3ca-786f-45fe-987a-d4834bbd0f34]
2026-03-08 00:02:18.201750 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-08 00:02:18.206199 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=9ae0b0ed839c956440c383ab2857381fc10d80b2]
2026-03-08 00:02:18.214480 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-08 00:02:18.218137 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=3e555129f7361c77e30c33d7b6cc46cf0661f3e9]
2026-03-08 00:02:18.224966 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-08 00:02:18.750561 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=f69177ca-c9b7-4ecf-919e-98158e504d7d]
2026-03-08 00:02:18.761757 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-08 00:02:18.770718 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=26ccb454-a8ab-488a-9282-a29bd19f440f]
2026-03-08 00:02:18.773721 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-08 00:02:18.808014 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=7bc88367-6aaf-4ded-8fa4-f9240096c464]
2026-03-08 00:02:18.811697 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-08 00:02:18.815763 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=d9cf7a23-7f28-4003-9453-869e07fd4fea]
2026-03-08 00:02:18.821810 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-08 00:02:18.821851 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=581ffd65-22a4-4ef2-934b-fe47abf1be5c]
2026-03-08 00:02:18.826072 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-08 00:02:19.171185 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=a9abd44a-efa3-4fc9-810c-e4cec7375a49]
2026-03-08 00:02:19.177280 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-08 00:02:19.195818 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=70953687-69fa-4056-8e35-7089ee1c64ea]
2026-03-08 00:02:19.197161 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=2f73f377-a3b9-4553-a6d0-e21973e3a5e5]
2026-03-08 00:02:19.200040 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-08 00:02:19.406686 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=1d4cf331-77e8-4e4e-b490-10f0636e01e9]
2026-03-08 00:02:21.582085 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=6f151639-f215-41c0-9f83-a142594f7403]
2026-03-08 00:02:22.157182 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=20d183f1-445d-49e2-ba1a-793a8137c84b]
2026-03-08 00:02:22.211552 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=544edfd2-ddc4-4596-85df-1c9b9e7c3b59]
2026-03-08 00:02:22.242760 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=c53b58f1-666b-45b2-9be0-abefaf2d6609]
2026-03-08 00:02:22.258944 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=c560df89-ac9f-43eb-b629-a1334440ff2f]
2026-03-08 00:02:22.260737 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1404ed60-298a-412c-bd4f-1e90f35345d3]
2026-03-08 00:02:22.593244 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8d620b1f-631a-4788-ba15-482a8a36a7f3]
2026-03-08 00:02:22.739109 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=fd513941-49d7-4565-a734-f619fe1f6e07]
2026-03-08 00:02:22.743045 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-08 00:02:22.745084 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-08 00:02:22.746057 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-08 00:02:22.971518 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e8586294-fed1-4c5c-9afb-b38e926ea79e]
2026-03-08 00:02:22.980579 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-08 00:02:22.990101 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-08 00:02:22.991204 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-08 00:02:22.991264 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-08 00:02:22.993891 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-08 00:02:22.995983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-08 00:02:23.029742 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=245534ab-81d9-4895-afe9-f82cc2982ac4]
2026-03-08 00:02:23.039023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-08 00:02:23.039103 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-08 00:02:23.039705 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-08 00:02:23.165182 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=e5d0eddf-8524-4cfc-9c17-818df5659fa0]
2026-03-08 00:02:23.170520 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-08 00:02:23.193562 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=d1634290-9ca6-4a48-83b2-b5ba304def59]
2026-03-08 00:02:23.203152 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-08 00:02:23.346113 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=70d5aee3-e4a4-4a59-b67a-bdf905c108c5]
2026-03-08 00:02:23.356601 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=12e87a03-6053-4407-a7ab-c9c5b4355d06]
2026-03-08 00:02:23.361332 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-08 00:02:23.367898 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-08 00:02:23.514488 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=f55303d2-4823-4edc-bdc0-333c7e808453]
2026-03-08 00:02:23.530273 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=914a21cf-6c76-4b20-8046-a20c7062d98d]
2026-03-08 00:02:23.530485 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-08 00:02:23.554102 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-08 00:02:23.747621 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=f4e47fc5-1039-4bc3-9b5a-988062f674d8]
2026-03-08 00:02:23.760382 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-08 00:02:23.936533 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=668909fc-1037-42a5-9361-0464bb845b83]
2026-03-08 00:02:24.003173 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=f6432476-926f-498f-b1ee-1339c87ad40d]
2026-03-08 00:02:24.132047 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f6a3a223-fdc2-4d12-8af7-be54147ed997]
2026-03-08 00:02:24.166050 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=de217344-89b9-4425-a590-a91c10815c9c]
2026-03-08 00:02:24.359643 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e78f3253-ae89-4e26-aed7-e9c6a7e12fdf]
2026-03-08 00:02:24.388252 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=261ea0c0-51fe-46d4-b84d-35ee59dc25ae]
2026-03-08 00:02:24.467410 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=8fb68a12-f248-4aeb-9382-f8f643ba86cb]
2026-03-08 00:02:24.540670 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=a9bdd459-bc7d-4b26-9c23-6d0e31b86b86]
2026-03-08 00:02:24.719983 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=0279c842-0b98-4a58-a1fb-feffc18c373a]
2026-03-08 00:02:25.284932 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=5055023b-216d-4984-9829-59b393db1365]
2026-03-08 00:02:25.313169 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-08 00:02:25.322754 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-08 00:02:25.329314 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-08 00:02:25.329899 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-08 00:02:25.337957 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-08 00:02:25.338482 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-08 00:02:25.341999 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-08 00:02:28.311101 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=ba7a324d-ae80-4407-b92e-63a70edf1a3c]
2026-03-08 00:02:28.318216 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-08 00:02:28.323991 | orchestrator | local_file.inventory: Creating...
2026-03-08 00:02:28.325065 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-08 00:02:28.331006 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e6316abec78f29982b279ea74fdbd7ef42d2987b]
2026-03-08 00:02:28.332561 | orchestrator | local_file.inventory: Creation complete after 0s [id=3c86480993c4f007471d7bc14059a2301bf6e361]
2026-03-08 00:02:29.230740 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ba7a324d-ae80-4407-b92e-63a70edf1a3c]
2026-03-08 00:02:35.328731 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-08 00:02:35.330951 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-08 00:02:35.331021 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-08 00:02:35.342437 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-08 00:02:35.342505 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-08 00:02:35.342522 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-08 00:02:45.337641 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-08 00:02:45.337806 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-08 00:02:45.337823 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-08 00:02:45.343272 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-08 00:02:45.343357 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-08 00:02:45.343364 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-08 00:02:46.189196 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=1e6fc843-4391-486b-82b1-be1f11829063]
2026-03-08 00:02:46.328576 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=575cd568-2c7a-4807-a8d6-2178c083594c]
2026-03-08 00:02:55.338323 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-08 00:02:55.338443 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-08 00:02:55.343904 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-08 00:02:55.344006 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-08 00:02:56.060559 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=f0f5a026-4efc-48d4-bed9-43b977648d3b]
2026-03-08 00:02:56.210070 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=ed2b4e68-0505-4e15-b29c-7b0da97bc0c6]
2026-03-08 00:02:56.214667 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6602620b-bf3e-4fed-bbde-cd2f77963c92]
2026-03-08 00:02:56.876035 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=638e3450-38dc-461e-ac7e-1c2a1c49db7e]
2026-03-08 00:02:56.899335 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-08 00:02:56.911797 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2471102318841826474]
2026-03-08 00:02:56.912353 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-08 00:02:56.912411 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-08 00:02:56.912572 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-08 00:02:56.912771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-08 00:02:56.913466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-08 00:02:56.914343 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-08 00:02:56.914789 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-08 00:02:56.919863 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-08 00:02:56.922978 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-08 00:02:56.963825 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-08 00:03:00.380580 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=f0f5a026-4efc-48d4-bed9-43b977648d3b/7bc88367-6aaf-4ded-8fa4-f9240096c464]
2026-03-08 00:03:00.382091 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=1e6fc843-4391-486b-82b1-be1f11829063/1d4cf331-77e8-4e4e-b490-10f0636e01e9]
2026-03-08 00:03:00.409378 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=575cd568-2c7a-4807-a8d6-2178c083594c/f69177ca-c9b7-4ecf-919e-98158e504d7d]
2026-03-08 00:03:00.710639 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=f0f5a026-4efc-48d4-bed9-43b977648d3b/70953687-69fa-4056-8e35-7089ee1c64ea]
2026-03-08 00:03:00.719901 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=575cd568-2c7a-4807-a8d6-2178c083594c/26ccb454-a8ab-488a-9282-a29bd19f440f]
2026-03-08 00:03:00.771307 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=1e6fc843-4391-486b-82b1-be1f11829063/2f73f377-a3b9-4553-a6d0-e21973e3a5e5]
2026-03-08 00:03:06.804840 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=f0f5a026-4efc-48d4-bed9-43b977648d3b/a9abd44a-efa3-4fc9-810c-e4cec7375a49]
2026-03-08 00:03:06.816434 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=575cd568-2c7a-4807-a8d6-2178c083594c/d9cf7a23-7f28-4003-9453-869e07fd4fea]
2026-03-08 00:03:06.849255 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=1e6fc843-4391-486b-82b1-be1f11829063/581ffd65-22a4-4ef2-934b-fe47abf1be5c]
2026-03-08 00:03:06.966231 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-08 00:03:16.972428 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-08 00:03:17.483294 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=0f3a4b79-fb18-4bad-b422-4784cefbd848]
2026-03-08 00:03:17.495798 | orchestrator |
2026-03-08 00:03:17.495884 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-08 00:03:17.495896 | orchestrator |
2026-03-08 00:03:17.495904 | orchestrator | Outputs:
2026-03-08 00:03:17.495912 | orchestrator |
2026-03-08 00:03:17.495931 | orchestrator | manager_address =
2026-03-08 00:03:17.495941 | orchestrator | private_key =
2026-03-08 00:03:17.572153 | orchestrator | ok: Runtime: 0:01:08.649814
2026-03-08 00:03:17.602347 |
2026-03-08 00:03:17.602570 | TASK [Create infrastructure (stable)]
2026-03-08 00:03:18.140147 | orchestrator | skipping: Conditional result was False
2026-03-08 00:03:18.159393 |
2026-03-08 00:03:18.159631 | TASK [Fetch manager address]
2026-03-08 00:03:18.679779 | orchestrator | ok
2026-03-08 00:03:18.687583 |
2026-03-08 00:03:18.687711 | TASK [Set manager_host address]
2026-03-08 00:03:18.762128 | orchestrator | ok
2026-03-08 00:03:18.772109 |
2026-03-08 00:03:18.772490 | LOOP [Update ansible collections]
2026-03-08 00:03:19.850062 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-08 00:03:19.850357 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-08 00:03:19.850399 | orchestrator | Starting galaxy collection install process
2026-03-08 00:03:19.850424 | orchestrator | Process install dependency map
2026-03-08 00:03:19.850447 | orchestrator | Starting collection install process
2026-03-08 00:03:19.850468 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-08 00:03:19.850493 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-08 00:03:19.850539 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-08 00:03:19.850594 | orchestrator | ok: Item: commons Runtime: 0:00:00.711983
2026-03-08 00:03:20.947822 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-08 00:03:20.948327 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-08 00:03:20.948429 | orchestrator | Starting galaxy collection install process
2026-03-08 00:03:20.948482 | orchestrator | Process install dependency map
2026-03-08 00:03:20.948583 | orchestrator | Starting collection install process
2026-03-08 00:03:20.948629 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-08 00:03:20.948670 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-08 00:03:20.948711 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-08 00:03:20.948774 | orchestrator | ok: Item: services Runtime: 0:00:00.789745
2026-03-08 00:03:20.965148 |
2026-03-08 00:03:20.965310 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-08 00:03:31.578031 | orchestrator | ok
2026-03-08 00:03:31.589221 |
2026-03-08 00:03:31.589372 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-08 00:04:31.640892 | orchestrator | ok
2026-03-08 00:04:31.648592 |
2026-03-08 00:04:31.648722 | TASK [Fetch manager ssh hostkey]
2026-03-08 00:04:33.218988 | orchestrator | Output suppressed because no_log was given
2026-03-08 00:04:33.228749 |
2026-03-08 00:04:33.228928 | TASK [Get ssh keypair from terraform environment]
2026-03-08 00:04:33.778923 | orchestrator | ok: Runtime: 0:00:00.007568
2026-03-08 00:04:33.794346 |
2026-03-08 00:04:33.794511 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-08 00:04:33.841670 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-08 00:04:33.852094 |
2026-03-08 00:04:33.852231 | TASK [Run manager part 0]
2026-03-08 00:04:34.828878 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-08 00:04:34.879166 | orchestrator |
2026-03-08 00:04:34.879226 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-08 00:04:34.879240 | orchestrator |
2026-03-08 00:04:34.879264 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-08 00:04:36.897358 | orchestrator | ok: [testbed-manager]
2026-03-08 00:04:36.897448 | orchestrator |
2026-03-08 00:04:36.897508 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-08 00:04:36.897617 | orchestrator |
2026-03-08 00:04:36.897641 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-08 00:04:38.763445 | orchestrator | ok: [testbed-manager]
2026-03-08 00:04:38.763490 | orchestrator |
2026-03-08 00:04:38.763501 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-08 00:04:39.392301 | orchestrator | ok: [testbed-manager]
2026-03-08 00:04:39.392335 | orchestrator |
2026-03-08 00:04:39.392342 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-08 00:04:39.436636 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.436698 | orchestrator | 2026-03-08 00:04:39.436708 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-08 00:04:39.467620 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.467669 | orchestrator | 2026-03-08 00:04:39.467680 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-08 00:04:39.497434 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.497472 | orchestrator | 2026-03-08 00:04:39.497477 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-08 00:04:39.525444 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.525481 | orchestrator | 2026-03-08 00:04:39.525486 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-08 00:04:39.557355 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.557401 | orchestrator | 2026-03-08 00:04:39.557409 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-08 00:04:39.593635 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.593676 | orchestrator | 2026-03-08 00:04:39.593683 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-08 00:04:39.626706 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:04:39.626756 | orchestrator | 2026-03-08 00:04:39.626768 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-08 00:04:40.349894 | orchestrator | changed: [testbed-manager] 2026-03-08 00:04:40.349930 | orchestrator | 2026-03-08 00:04:40.349936 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-08 00:07:31.377846 | orchestrator | changed: [testbed-manager] 
2026-03-08 00:07:31.377945 | orchestrator | 2026-03-08 00:07:31.377963 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-08 00:09:18.454687 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:18.455009 | orchestrator | 2026-03-08 00:09:18.455035 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-08 00:09:38.946000 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:38.946132 | orchestrator | 2026-03-08 00:09:38.946152 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-08 00:09:47.698357 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:47.698433 | orchestrator | 2026-03-08 00:09:47.698442 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-08 00:09:47.751934 | orchestrator | ok: [testbed-manager] 2026-03-08 00:09:47.751974 | orchestrator | 2026-03-08 00:09:47.751981 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-08 00:09:48.593163 | orchestrator | ok: [testbed-manager] 2026-03-08 00:09:48.593366 | orchestrator | 2026-03-08 00:09:48.593385 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-08 00:09:49.338502 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:49.338587 | orchestrator | 2026-03-08 00:09:49.338601 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-08 00:09:55.637836 | orchestrator | changed: [testbed-manager] 2026-03-08 00:09:55.637885 | orchestrator | 2026-03-08 00:09:55.637911 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-08 00:10:01.586728 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:01.586771 | orchestrator | 2026-03-08 00:10:01.586780 | orchestrator | 
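The venv bootstrap traced above (create a venv directory, then install netaddr and ansible-core into it) follows a common pattern: build the venv once, then address its interpreter and pip by absolute path instead of activating it. A minimal sketch, with a temp directory standing in for the job's /opt/venv:

```shell
# Sketch of the venv bootstrap pattern above.
# Assumption: python3 with the stdlib venv module is available.
set -e
VENV_DIR="$(mktemp -d)/venv"    # stand-in for /opt/venv
python3 -m venv "$VENV_DIR"
# The venv's interpreter reports the venv directory as its prefix:
"$VENV_DIR/bin/python" -c 'import sys; print(sys.prefix)'
```

Installing into it is then `"$VENV_DIR/bin/pip" install <package>`, which is what the two "Install ... in venv" tasks above do for netaddr and ansible-core.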
TASK [Install requests >= 2.32.2] ********************************************** 2026-03-08 00:10:04.133136 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:04.133222 | orchestrator | 2026-03-08 00:10:04.133238 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-08 00:10:05.794073 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:05.794923 | orchestrator | 2026-03-08 00:10:05.794975 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-08 00:10:06.783379 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-08 00:10:06.783436 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-08 00:10:06.783450 | orchestrator | 2026-03-08 00:10:06.783461 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-08 00:10:06.826000 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-08 00:10:06.826111 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-08 00:10:06.826118 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-08 00:10:06.826122 | orchestrator | deprecation_warnings=False in ansible.cfg. 
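As the deprecation warning itself notes, these messages can be switched off in ansible.cfg; the setting lives in the `[defaults]` section. Whether to silence them is a policy choice, since they flag behavior slated for removal in ansible-core 2.19:

```ini
# ansible.cfg fragment suggested by the warning text above
[defaults]
deprecation_warnings = False
```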
2026-03-08 00:10:12.305819 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-08 00:10:12.305889 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-08 00:10:12.305903 | orchestrator | 2026-03-08 00:10:12.305915 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-08 00:10:12.890698 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:12.890744 | orchestrator | 2026-03-08 00:10:12.890753 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-08 00:10:35.641673 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-08 00:10:35.641722 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-08 00:10:35.641732 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-08 00:10:35.641740 | orchestrator | 2026-03-08 00:10:35.641747 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-08 00:10:37.918557 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-08 00:10:37.918673 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-08 00:10:37.918724 | orchestrator | 2026-03-08 00:10:37.918815 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-08 00:10:37.918830 | orchestrator | 2026-03-08 00:10:37.918841 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:10:39.342252 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:39.342306 | orchestrator | 2026-03-08 00:10:39.342314 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-08 00:10:39.392889 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:39.392948 | 
orchestrator | 2026-03-08 00:10:39.392962 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-08 00:10:39.461277 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:39.461317 | orchestrator | 2026-03-08 00:10:39.461324 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-08 00:10:40.184574 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:40.184662 | orchestrator | 2026-03-08 00:10:40.184681 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-08 00:10:40.915394 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:40.915490 | orchestrator | 2026-03-08 00:10:40.915508 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-08 00:10:42.236622 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-08 00:10:42.236661 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-08 00:10:42.236668 | orchestrator | 2026-03-08 00:10:42.236693 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-08 00:10:44.220728 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:44.220841 | orchestrator | 2026-03-08 00:10:44.220858 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-08 00:10:45.970694 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:10:45.970791 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-08 00:10:45.970806 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:10:45.970819 | orchestrator | 2026-03-08 00:10:45.970831 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-08 00:10:46.029818 | orchestrator | skipping: 
[testbed-manager] 2026-03-08 00:10:46.029882 | orchestrator | 2026-03-08 00:10:46.029899 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-08 00:10:46.111407 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:46.111484 | orchestrator | 2026-03-08 00:10:46.111509 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-08 00:10:46.664363 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:46.664415 | orchestrator | 2026-03-08 00:10:46.664427 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-08 00:10:46.729701 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:46.729760 | orchestrator | 2026-03-08 00:10:46.729774 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-08 00:10:47.581707 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:10:47.581796 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:47.581824 | orchestrator | 2026-03-08 00:10:47.581846 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-08 00:10:47.612411 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:47.612472 | orchestrator | 2026-03-08 00:10:47.612487 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-08 00:10:47.654849 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:47.654904 | orchestrator | 2026-03-08 00:10:47.654918 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-08 00:10:47.697815 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:47.697879 | orchestrator | 2026-03-08 00:10:47.697897 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-08 00:10:47.780813 | 
orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:47.780922 | orchestrator | 2026-03-08 00:10:47.780942 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-08 00:10:48.495247 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:48.495332 | orchestrator | 2026-03-08 00:10:48.495352 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-08 00:10:48.495373 | orchestrator | 2026-03-08 00:10:48.495392 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:10:49.836736 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:49.836783 | orchestrator | 2026-03-08 00:10:49.836790 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-08 00:10:50.773441 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:50.773518 | orchestrator | 2026-03-08 00:10:50.773532 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:10:50.773545 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-08 00:10:50.773556 | orchestrator | 2026-03-08 00:10:51.130717 | orchestrator | ok: Runtime: 0:06:16.717645 2026-03-08 00:10:51.147290 | 2026-03-08 00:10:51.147591 | TASK [Point out that logging in to the manager is now possible] 2026-03-08 00:10:51.192691 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-08 00:10:51.201442 | 2026-03-08 00:10:51.201569 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-08 00:10:51.237198 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
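The operator play recapped above wrote three locale exports into the operator's .bashrc (the "Set language variables in .bashrc configuration file" task). That append-only-if-missing pattern, roughly what Ansible's lineinfile module does, can be sketched against a temp file standing in for .bashrc:

```shell
# Sketch of an idempotent append-if-missing edit (lineinfile-style).
# Assumption: the temp file stands in for the operator's ~/.bashrc.
BASHRC="$(mktemp)"
add_line() {
    # -x: match the whole line, -F: fixed string, -q: quiet
    grep -qxF "$1" "$BASHRC" || echo "$1" >> "$BASHRC"
}
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    add_line "$line"
    add_line "$line"   # second call is a no-op: the line already exists
done
wc -l < "$BASHRC"
```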
2026-03-08 00:10:51.246801 | 2026-03-08 00:10:51.247058 | TASK [Run manager part 1 + 2] 2026-03-08 00:10:52.891623 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-08 00:10:52.953877 | orchestrator | 2026-03-08 00:10:52.953930 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-08 00:10:52.953938 | orchestrator | 2026-03-08 00:10:52.953952 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:10:55.561131 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:55.561247 | orchestrator | 2026-03-08 00:10:55.561328 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-08 00:10:55.605625 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:55.605687 | orchestrator | 2026-03-08 00:10:55.605699 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-08 00:10:55.656006 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:55.656094 | orchestrator | 2026-03-08 00:10:55.656112 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-08 00:10:55.709229 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:55.709280 | orchestrator | 2026-03-08 00:10:55.709287 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-08 00:10:55.796281 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:55.796335 | orchestrator | 2026-03-08 00:10:55.796343 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-08 00:10:55.859676 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:55.859772 | orchestrator | 2026-03-08 00:10:55.859782 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-08 00:10:55.911437 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-08 00:10:55.911521 | orchestrator | 2026-03-08 00:10:55.911536 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-08 00:10:56.639619 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:56.639726 | orchestrator | 2026-03-08 00:10:56.639746 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-08 00:10:56.696080 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:10:56.696146 | orchestrator | 2026-03-08 00:10:56.696156 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-08 00:10:58.080082 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:58.080318 | orchestrator | 2026-03-08 00:10:58.080340 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-08 00:10:58.639805 | orchestrator | ok: [testbed-manager] 2026-03-08 00:10:58.639884 | orchestrator | 2026-03-08 00:10:58.639899 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-08 00:10:59.771363 | orchestrator | changed: [testbed-manager] 2026-03-08 00:10:59.771423 | orchestrator | 2026-03-08 00:10:59.771438 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-08 00:11:14.414577 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:14.414640 | orchestrator | 2026-03-08 00:11:14.414648 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-08 00:11:15.100919 | orchestrator | ok: [testbed-manager] 2026-03-08 00:11:15.100982 | orchestrator | 2026-03-08 00:11:15.100991 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-08 00:11:15.150115 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:11:15.150159 | orchestrator | 2026-03-08 00:11:15.150172 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-08 00:11:16.108821 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:16.108928 | orchestrator | 2026-03-08 00:11:16.108990 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-08 00:11:17.062094 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:17.062184 | orchestrator | 2026-03-08 00:11:17.062201 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-08 00:11:17.661479 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:17.661560 | orchestrator | 2026-03-08 00:11:17.661577 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-08 00:11:17.702884 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-08 00:11:17.703029 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-08 00:11:17.703048 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-08 00:11:17.703061 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-08 00:11:20.631069 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:20.631117 | orchestrator | 2026-03-08 00:11:20.631123 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-08 00:11:29.522578 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-08 00:11:29.522698 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-08 00:11:29.522718 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-08 00:11:29.522731 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-08 00:11:29.522752 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-08 00:11:29.522764 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-08 00:11:29.522776 | orchestrator | 2026-03-08 00:11:29.522788 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-08 00:11:30.614215 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:30.614303 | orchestrator | 2026-03-08 00:11:30.614321 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-08 00:11:30.656197 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:11:30.656289 | orchestrator | 2026-03-08 00:11:30.656305 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-08 00:11:33.756636 | orchestrator | changed: [testbed-manager] 2026-03-08 00:11:33.756706 | orchestrator | 2026-03-08 00:11:33.756719 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-08 00:11:33.800905 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:11:33.800969 | orchestrator | 2026-03-08 00:11:33.800975 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-08 00:13:08.471375 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:13:08.471483 | orchestrator | 2026-03-08 00:13:08.471502 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-08 00:13:09.590001 | orchestrator | ok: [testbed-manager] 2026-03-08 00:13:09.590060 | orchestrator | 2026-03-08 00:13:09.590068 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:13:09.590074 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-08 00:13:09.590080 | orchestrator | 2026-03-08 00:13:09.882069 | orchestrator | ok: Runtime: 0:02:18.163578 2026-03-08 00:13:09.896061 | 2026-03-08 00:13:09.896192 | TASK [Reboot manager] 2026-03-08 00:13:11.433663 | orchestrator | ok: Runtime: 0:00:00.984127 2026-03-08 00:13:11.450458 | 2026-03-08 00:13:11.450619 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-08 00:13:27.871049 | orchestrator | ok 2026-03-08 00:13:27.880942 | 2026-03-08 00:13:27.881079 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-08 00:14:27.926984 | orchestrator | ok 2026-03-08 00:14:27.936035 | 2026-03-08 00:14:27.936157 | TASK [Deploy manager + bootstrap nodes] 2026-03-08 00:14:31.217124 | orchestrator | 2026-03-08 00:14:31.217272 | orchestrator | # DEPLOY MANAGER 2026-03-08 00:14:31.217283 | orchestrator | 2026-03-08 00:14:31.217288 | orchestrator | + set -e 2026-03-08 00:14:31.217292 | orchestrator | + echo 2026-03-08 00:14:31.217297 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-08 00:14:31.217304 | orchestrator | + echo 2026-03-08 00:14:31.217324 | orchestrator | + cat /opt/manager-vars.sh 2026-03-08 00:14:31.220606 | orchestrator | export NUMBER_OF_NODES=6 2026-03-08 00:14:31.220622 | orchestrator | 2026-03-08 00:14:31.220627 | orchestrator | export CEPH_VERSION=reef 2026-03-08 00:14:31.220632 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-08 00:14:31.220637 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-08 00:14:31.220647 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-08 00:14:31.220651 | orchestrator | 2026-03-08 00:14:31.220658 | orchestrator | export ARA=false 2026-03-08 00:14:31.220662 | orchestrator | export DEPLOY_MODE=manager 2026-03-08 00:14:31.220669 | orchestrator | export TEMPEST=true 2026-03-08 00:14:31.220673 | orchestrator | export IS_ZUUL=true 2026-03-08 00:14:31.220677 | orchestrator | 2026-03-08 00:14:31.220684 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206 2026-03-08 00:14:31.220688 | orchestrator | export EXTERNAL_API=false 2026-03-08 00:14:31.220706 | orchestrator | 2026-03-08 00:14:31.220710 | orchestrator | export IMAGE_USER=ubuntu 2026-03-08 00:14:31.220716 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-08 00:14:31.220720 | orchestrator | 2026-03-08 00:14:31.220724 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-08 00:14:31.220923 | orchestrator | 2026-03-08 00:14:31.220929 | orchestrator | + echo 2026-03-08 00:14:31.220934 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:14:31.221635 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:14:31.221642 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:14:31.221719 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:14:31.221726 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-08 00:14:31.221886 | orchestrator | + source /opt/manager-vars.sh 2026-03-08 00:14:31.221892 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-08 00:14:31.221896 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-08 00:14:31.221900 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-08 00:14:31.221904 | orchestrator | ++ CEPH_VERSION=reef 2026-03-08 00:14:31.221929 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-08 00:14:31.221934 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-08 00:14:31.221938 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-08 00:14:31.221942 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-08 00:14:31.221982 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-08 00:14:31.221993 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-08 00:14:31.221997 | orchestrator | ++ export ARA=false 2026-03-08 00:14:31.222001 | orchestrator | ++ ARA=false 2026-03-08 00:14:31.222005 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-08 00:14:31.222009 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-08 00:14:31.222025 | orchestrator | ++ export TEMPEST=true 2026-03-08 00:14:31.222029 | orchestrator | ++ TEMPEST=true 2026-03-08 00:14:31.222033 | orchestrator | ++ export IS_ZUUL=true 2026-03-08 00:14:31.222037 | orchestrator | ++ IS_ZUUL=true 2026-03-08 00:14:31.222040 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206 2026-03-08 00:14:31.222046 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206 2026-03-08 00:14:31.222050 | orchestrator | ++ export EXTERNAL_API=false 2026-03-08 00:14:31.222065 | orchestrator | ++ EXTERNAL_API=false 2026-03-08 00:14:31.222070 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-08 00:14:31.222074 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-08 00:14:31.222078 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-08 00:14:31.222082 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-08 00:14:31.222087 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-08 00:14:31.222092 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-08 00:14:31.222095 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-08 00:14:31.273286 | orchestrator | + docker version 2026-03-08 00:14:31.395196 | orchestrator | Client: Docker Engine - Community 2026-03-08 00:14:31.395244 | orchestrator | Version: 27.5.1 2026-03-08 00:14:31.395250 | orchestrator | API version: 1.47 2026-03-08 00:14:31.395256 | orchestrator | Go version: go1.22.11 2026-03-08 00:14:31.395260 | orchestrator | Git commit: 9f9e405 2026-03-08 00:14:31.395264 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-08 00:14:31.395270 | orchestrator | OS/Arch: linux/amd64 2026-03-08 00:14:31.395274 | orchestrator | Context: default 2026-03-08 00:14:31.395277 | orchestrator | 2026-03-08 00:14:31.395281 | orchestrator | Server: Docker Engine - Community 2026-03-08 00:14:31.395285 | orchestrator | Engine: 2026-03-08 00:14:31.395289 | orchestrator | Version: 27.5.1 2026-03-08 00:14:31.395293 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-08 00:14:31.395317 | orchestrator | Go version: go1.22.11 2026-03-08 00:14:31.395321 | orchestrator | Git commit: 4c9b3b0 2026-03-08 00:14:31.395325 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-08 00:14:31.395329 | orchestrator | OS/Arch: linux/amd64 2026-03-08 00:14:31.395333 | orchestrator | Experimental: false 2026-03-08 00:14:31.395336 | orchestrator | containerd: 2026-03-08 00:14:31.395340 | orchestrator | Version: v2.2.1 2026-03-08 00:14:31.395344 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-08 00:14:31.395348 | orchestrator | runc: 2026-03-08 00:14:31.395352 | orchestrator | Version: 1.3.4 2026-03-08 00:14:31.395356 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-08 00:14:31.395360 | orchestrator | docker-init: 2026-03-08 00:14:31.395364 | orchestrator | Version: 0.19.0 2026-03-08 00:14:31.395368 | orchestrator | GitCommit: de40ad0 2026-03-08 00:14:31.398008 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-08 00:14:31.407341 | orchestrator | + set -e 2026-03-08 00:14:31.407351 | orchestrator | + source /opt/manager-vars.sh 2026-03-08 00:14:31.407403 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-08 00:14:31.407408 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-08 00:14:31.407412 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-08 00:14:31.407416 | orchestrator | ++ CEPH_VERSION=reef 2026-03-08 00:14:31.407420 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-08 
00:14:31.407424 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-08 00:14:31.407429 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-08 00:14:31.407433 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-08 00:14:31.407437 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-08 00:14:31.407440 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-08 00:14:31.407444 | orchestrator | ++ export ARA=false 2026-03-08 00:14:31.407448 | orchestrator | ++ ARA=false 2026-03-08 00:14:31.407452 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-08 00:14:31.407456 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-08 00:14:31.407460 | orchestrator | ++ export TEMPEST=true 2026-03-08 00:14:31.407463 | orchestrator | ++ TEMPEST=true 2026-03-08 00:14:31.407467 | orchestrator | ++ export IS_ZUUL=true 2026-03-08 00:14:31.407471 | orchestrator | ++ IS_ZUUL=true 2026-03-08 00:14:31.407475 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206 2026-03-08 00:14:31.407479 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206 2026-03-08 00:14:31.407483 | orchestrator | ++ export EXTERNAL_API=false 2026-03-08 00:14:31.407486 | orchestrator | ++ EXTERNAL_API=false 2026-03-08 00:14:31.407490 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-08 00:14:31.407494 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-08 00:14:31.407498 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-08 00:14:31.407501 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-08 00:14:31.407505 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-08 00:14:31.407509 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-08 00:14:31.407513 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:14:31.407517 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:14:31.407520 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:14:31.407524 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:14:31.407530 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-03-08 00:14:31.407535 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-08 00:14:31.407651 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-08 00:14:31.407657 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-08 00:14:31.414540 | orchestrator | + set -e 2026-03-08 00:14:31.414560 | orchestrator | + VERSION=reef 2026-03-08 00:14:31.415566 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:14:31.420522 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-08 00:14:31.420544 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:14:31.425089 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-08 00:14:31.431650 | orchestrator | + set -e 2026-03-08 00:14:31.431679 | orchestrator | + VERSION=2024.2 2026-03-08 00:14:31.432855 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:14:31.436021 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-08 00:14:31.436108 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-08 00:14:31.440668 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-08 00:14:31.441460 | orchestrator | ++ semver latest 7.0.0 2026-03-08 00:14:31.497806 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:14:31.497867 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-08 00:14:31.497877 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-08 00:14:31.498831 | orchestrator | ++ semver latest 10.0.0-0 2026-03-08 00:14:31.552460 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:14:31.553200 | orchestrator | ++ semver 2024.2 2025.1 2026-03-08 00:14:31.607645 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:14:31.607724 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-08 00:14:31.692423 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:14:31.693534 | orchestrator | + source /opt/venv/bin/activate 2026-03-08 00:14:31.694575 | orchestrator | ++ deactivate nondestructive 2026-03-08 00:14:31.694587 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:14:31.694591 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:14:31.694638 | orchestrator | ++ hash -r 2026-03-08 00:14:31.694644 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:14:31.694648 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-08 00:14:31.694860 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-08 00:14:31.694953 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-08 00:14:31.694980 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-08 00:14:31.694992 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-08 00:14:31.695002 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-08 00:14:31.695012 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-08 00:14:31.695022 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:14:31.695033 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:14:31.695043 | orchestrator | ++ export PATH 2026-03-08 00:14:31.695057 | orchestrator | ++ '[' -n '' ']' 2026-03-08 00:14:31.695066 | orchestrator | ++ '[' -z '' ']' 2026-03-08 00:14:31.695076 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-08 00:14:31.695116 | orchestrator | ++ PS1='(venv) ' 2026-03-08 00:14:31.695127 | orchestrator | ++ export PS1 2026-03-08 00:14:31.695136 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-08 00:14:31.695147 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-08 00:14:31.695157 | orchestrator | ++ hash -r 2026-03-08 00:14:31.695263 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-08 00:14:32.882733 | orchestrator | 2026-03-08 00:14:32.882872 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-08 00:14:32.882898 | orchestrator | 2026-03-08 00:14:32.882918 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-08 00:14:33.428315 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:33.428393 | orchestrator | 2026-03-08 00:14:33.428401 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-08 00:14:34.368334 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:34.368439 | orchestrator | 2026-03-08 00:14:34.368454 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-08 00:14:34.368466 | orchestrator | 2026-03-08 00:14:34.368476 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:14:36.618993 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:36.619089 | orchestrator | 2026-03-08 00:14:36.619101 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-08 00:14:36.664380 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:36.664457 | orchestrator | 2026-03-08 00:14:36.664467 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-08 00:14:37.098366 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:37.098482 | orchestrator | 2026-03-08 00:14:37.098504 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-08 00:14:37.134392 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:14:37.134483 | orchestrator | 2026-03-08 00:14:37.134496 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-08 00:14:37.467021 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:37.467104 | orchestrator | 2026-03-08 00:14:37.467114 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-08 00:14:37.802592 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:37.802678 | orchestrator | 2026-03-08 00:14:37.802700 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-08 00:14:37.926706 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:14:37.926785 | orchestrator | 2026-03-08 00:14:37.926792 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-08 00:14:37.926797 | orchestrator | 2026-03-08 00:14:37.926801 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:14:39.583897 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:39.584050 | orchestrator | 2026-03-08 00:14:39.584077 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-08 00:14:39.697916 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-08 00:14:39.698056 | orchestrator | 2026-03-08 00:14:39.698075 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-08 00:14:39.755580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-08 00:14:39.755702 | orchestrator | 2026-03-08 00:14:39.755717 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-08 00:14:40.813029 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-08 00:14:40.813141 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-08 00:14:40.813159 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-08 00:14:40.813171 | orchestrator | 2026-03-08 00:14:40.813183 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-08 00:14:42.542149 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-08 00:14:42.542269 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-08 00:14:42.542285 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-08 00:14:42.542298 | orchestrator | 2026-03-08 00:14:42.542311 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-08 00:14:43.151390 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:14:43.151472 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:43.151484 | orchestrator | 2026-03-08 00:14:43.151492 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-08 00:14:43.773740 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:14:43.773847 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:43.773863 | orchestrator | 2026-03-08 00:14:43.773876 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-08 00:14:43.827520 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:14:43.827634 | orchestrator | 2026-03-08 00:14:43.827656 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-08 00:14:44.179506 | orchestrator | ok: [testbed-manager] 2026-03-08 00:14:44.179655 | orchestrator | 2026-03-08 00:14:44.179673 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-08 00:14:44.235049 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-08 00:14:44.235204 | orchestrator | 2026-03-08 00:14:44.235232 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-08 00:14:45.280026 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:45.280142 | orchestrator | 2026-03-08 00:14:45.280159 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-08 00:14:46.082573 | orchestrator | changed: [testbed-manager] 2026-03-08 00:14:46.082728 | orchestrator | 2026-03-08 00:14:46.082753 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-08 00:15:00.098375 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:00.098500 | orchestrator | 2026-03-08 00:15:00.098539 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-08 00:15:00.148381 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:15:00.148491 | orchestrator | 2026-03-08 00:15:00.148513 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-08 00:15:00.148532 | orchestrator | 2026-03-08 00:15:00.148549 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:15:01.825471 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:01.825578 | orchestrator | 2026-03-08 00:15:01.825638 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-08 00:15:01.930772 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-08 00:15:01.930894 | orchestrator | 2026-03-08 00:15:01.930921 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-08 00:15:01.981165 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:15:01.981260 | orchestrator | 2026-03-08 00:15:01.981275 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-08 00:15:04.591747 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:04.591856 | orchestrator | 2026-03-08 00:15:04.591874 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-08 00:15:04.637807 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:04.637917 | orchestrator | 2026-03-08 00:15:04.637934 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-08 00:15:04.770134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-08 00:15:04.770233 | orchestrator | 2026-03-08 00:15:04.770251 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-08 00:15:07.514209 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-08 00:15:07.514303 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-08 00:15:07.514318 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-08 00:15:07.514330 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-08 00:15:07.514341 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-08 00:15:07.514352 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-08 00:15:07.514363 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-08 00:15:07.514374 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-08 00:15:07.514385 | orchestrator | 2026-03-08 00:15:07.514398 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-08 00:15:08.150237 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:08.150322 | orchestrator | 2026-03-08 00:15:08.150343 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-08 00:15:08.778633 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:08.778771 | orchestrator | 2026-03-08 00:15:08.778795 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-08 00:15:08.849622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-08 00:15:08.849765 | orchestrator | 2026-03-08 00:15:08.849792 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-08 00:15:10.013156 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-08 00:15:10.013250 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-08 00:15:10.013269 | orchestrator | 2026-03-08 00:15:10.013287 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-08 00:15:10.611281 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:10.611383 | orchestrator | 2026-03-08 00:15:10.611402 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-08 00:15:10.665560 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:15:10.665651 | orchestrator | 2026-03-08 00:15:10.665708 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-08 00:15:12.177498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-08 00:15:12.177615 | orchestrator | 2026-03-08 00:15:12.177630 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-08 00:15:12.768778 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:12.768885 | orchestrator | 2026-03-08 00:15:12.768901 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-08 00:15:12.815811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-08 00:15:12.815930 | orchestrator | 2026-03-08 00:15:12.815945 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-08 00:15:14.124089 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:15:14.124196 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-08 00:15:14.124211 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:14.124224 | orchestrator | 2026-03-08 00:15:14.124236 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-08 00:15:14.720174 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:14.720280 | orchestrator | 2026-03-08 00:15:14.720295 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-08 00:15:14.766843 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:15:14.766940 | orchestrator | 2026-03-08 00:15:14.766955 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-08 00:15:14.860857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-08 00:15:14.860967 | orchestrator | 2026-03-08 00:15:14.860988 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-08 00:15:15.350286 | orchestrator | changed: [testbed-manager] 2026-03-08 
00:15:15.350395 | orchestrator | 2026-03-08 00:15:15.350438 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-08 00:15:15.733580 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:15.733752 | orchestrator | 2026-03-08 00:15:15.733770 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-08 00:15:16.924908 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-08 00:15:16.925017 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-08 00:15:16.925040 | orchestrator | 2026-03-08 00:15:16.925060 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-08 00:15:17.542001 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:17.542164 | orchestrator | 2026-03-08 00:15:17.542180 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-08 00:15:17.915975 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:17.916080 | orchestrator | 2026-03-08 00:15:17.916097 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-08 00:15:18.269909 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:18.270007 | orchestrator | 2026-03-08 00:15:18.270075 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-08 00:15:18.319282 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:15:18.319411 | orchestrator | 2026-03-08 00:15:18.319436 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-08 00:15:19.454118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-08 00:15:19.454214 | orchestrator | 2026-03-08 00:15:19.454229 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-08 00:15:19.499109 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:19.499229 | orchestrator | 2026-03-08 00:15:19.499252 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-08 00:15:21.417941 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-08 00:15:21.418122 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-08 00:15:21.418140 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-08 00:15:21.418151 | orchestrator | 2026-03-08 00:15:21.418961 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-08 00:15:22.090810 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:22.090915 | orchestrator | 2026-03-08 00:15:22.090932 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-08 00:15:22.770283 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:22.770386 | orchestrator | 2026-03-08 00:15:22.770403 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-08 00:15:23.438481 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:23.438595 | orchestrator | 2026-03-08 00:15:23.438615 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-08 00:15:23.512719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-08 00:15:23.512797 | orchestrator | 2026-03-08 00:15:23.512805 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-08 00:15:23.555887 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:23.555989 | orchestrator | 2026-03-08 00:15:23.556004 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-08 00:15:24.222416 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-08 00:15:24.222522 | orchestrator | 2026-03-08 00:15:24.222538 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-08 00:15:24.307359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-08 00:15:24.307467 | orchestrator | 2026-03-08 00:15:24.307483 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-08 00:15:24.993182 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:24.993291 | orchestrator | 2026-03-08 00:15:24.993308 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-08 00:15:25.551369 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:25.551471 | orchestrator | 2026-03-08 00:15:25.551485 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-08 00:15:25.597510 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:15:25.597610 | orchestrator | 2026-03-08 00:15:25.597626 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-08 00:15:25.638250 | orchestrator | ok: [testbed-manager] 2026-03-08 00:15:25.638344 | orchestrator | 2026-03-08 00:15:25.638359 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-08 00:15:26.405130 | orchestrator | changed: [testbed-manager] 2026-03-08 00:15:26.405239 | orchestrator | 2026-03-08 00:15:26.405256 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-08 00:16:40.728851 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:40.728960 | orchestrator | 2026-03-08 
00:16:40.728975 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-08 00:16:41.662554 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:41.662691 | orchestrator | 2026-03-08 00:16:41.662704 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-08 00:16:41.708852 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:41.708959 | orchestrator | 2026-03-08 00:16:41.708980 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-08 00:16:44.014974 | orchestrator | changed: [testbed-manager] 2026-03-08 00:16:44.015082 | orchestrator | 2026-03-08 00:16:44.015098 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-08 00:16:44.094894 | orchestrator | ok: [testbed-manager] 2026-03-08 00:16:44.094989 | orchestrator | 2026-03-08 00:16:44.095024 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-08 00:16:44.095037 | orchestrator | 2026-03-08 00:16:44.095049 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-08 00:16:44.157227 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:16:44.157347 | orchestrator | 2026-03-08 00:16:44.157372 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-08 00:17:44.211086 | orchestrator | Pausing for 60 seconds 2026-03-08 00:17:44.211205 | orchestrator | changed: [testbed-manager] 2026-03-08 00:17:44.211221 | orchestrator | 2026-03-08 00:17:44.211236 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-08 00:17:48.295330 | orchestrator | changed: [testbed-manager] 2026-03-08 00:17:48.295421 | orchestrator | 2026-03-08 00:17:48.295434 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-08 00:18:29.713015 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-08 00:18:29.713135 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-08 00:18:29.713152 | orchestrator | changed: [testbed-manager] 2026-03-08 00:18:29.713192 | orchestrator | 2026-03-08 00:18:29.713205 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-08 00:18:38.790807 | orchestrator | changed: [testbed-manager] 2026-03-08 00:18:38.790921 | orchestrator | 2026-03-08 00:18:38.790938 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-08 00:18:38.871819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-08 00:18:38.871919 | orchestrator | 2026-03-08 00:18:38.871934 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-08 00:18:38.871947 | orchestrator | 2026-03-08 00:18:38.871958 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-08 00:18:38.919898 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:18:38.920001 | orchestrator | 2026-03-08 00:18:38.920018 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-08 00:18:38.989974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-08 00:18:38.990165 | orchestrator | 2026-03-08 00:18:38.990193 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-08 00:18:39.689433 | orchestrator | changed: [testbed-manager] 2026-03-08 00:18:39.689536 | 
orchestrator | 2026-03-08 00:18:39.689553 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-08 00:18:42.614117 | orchestrator | ok: [testbed-manager] 2026-03-08 00:18:42.614244 | orchestrator | 2026-03-08 00:18:42.614259 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-08 00:18:42.682296 | orchestrator | ok: [testbed-manager] => { 2026-03-08 00:18:42.682393 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-08 00:18:42.682408 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-08 00:18:42.682420 | orchestrator | "Checking running containers against expected versions...", 2026-03-08 00:18:42.682432 | orchestrator | "", 2026-03-08 00:18:42.682447 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-08 00:18:42.682458 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-08 00:18:42.682469 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682481 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-08 00:18:42.682492 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682503 | orchestrator | "", 2026-03-08 00:18:42.682514 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-08 00:18:42.682525 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-08 00:18:42.682536 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682547 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-08 00:18:42.682557 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682568 | orchestrator | "", 2026-03-08 00:18:42.682579 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-08 00:18:42.682590 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-08 
00:18:42.682599 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682609 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-08 00:18:42.682619 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682628 | orchestrator | "", 2026-03-08 00:18:42.682638 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-08 00:18:42.682690 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-08 00:18:42.682703 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682713 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-08 00:18:42.682723 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682733 | orchestrator | "", 2026-03-08 00:18:42.682742 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-08 00:18:42.682752 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-08 00:18:42.682787 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682797 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-08 00:18:42.682808 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682818 | orchestrator | "", 2026-03-08 00:18:42.682830 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-08 00:18:42.682847 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.682864 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682881 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.682899 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.682915 | orchestrator | "", 2026-03-08 00:18:42.682930 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-08 00:18:42.682948 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-08 00:18:42.682966 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.682983 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-08 00:18:42.683003 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683022 | orchestrator | "", 2026-03-08 00:18:42.683036 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-08 00:18:42.683045 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-08 00:18:42.683055 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683064 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-08 00:18:42.683074 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683083 | orchestrator | "", 2026-03-08 00:18:42.683102 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-08 00:18:42.683112 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-08 00:18:42.683127 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683137 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-08 00:18:42.683147 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683156 | orchestrator | "", 2026-03-08 00:18:42.683166 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-08 00:18:42.683175 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-08 00:18:42.683185 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683194 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-08 00:18:42.683204 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683213 | orchestrator | "", 2026-03-08 00:18:42.683222 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-08 00:18:42.683232 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683241 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683250 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683260 | orchestrator | " 
Status: ✅ MATCH", 2026-03-08 00:18:42.683269 | orchestrator | "", 2026-03-08 00:18:42.683279 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-08 00:18:42.683288 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683297 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683307 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683316 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683325 | orchestrator | "", 2026-03-08 00:18:42.683335 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-08 00:18:42.683344 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683353 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683363 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683372 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683381 | orchestrator | "", 2026-03-08 00:18:42.683391 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-08 00:18:42.683400 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683410 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683419 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683437 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683447 | orchestrator | "", 2026-03-08 00:18:42.683456 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-08 00:18:42.683484 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683494 | orchestrator | " Enabled: true", 2026-03-08 00:18:42.683504 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-08 00:18:42.683513 | orchestrator | " Status: ✅ MATCH", 2026-03-08 00:18:42.683523 | orchestrator | "", 2026-03-08 00:18:42.683532 | orchestrator | "=== Summary ===", 2026-03-08 
00:18:42.683541 | orchestrator | "Errors (version mismatches): 0", 2026-03-08 00:18:42.683551 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-08 00:18:42.683560 | orchestrator | "", 2026-03-08 00:18:42.683569 | orchestrator | "✅ All running containers match expected versions!" 2026-03-08 00:18:42.683579 | orchestrator | ] 2026-03-08 00:18:42.683589 | orchestrator | } 2026-03-08 00:18:42.683599 | orchestrator | 2026-03-08 00:18:42.683608 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-08 00:18:42.738909 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:18:42.739000 | orchestrator | 2026-03-08 00:18:42.739014 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:18:42.739028 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-08 00:18:42.739039 | orchestrator | 2026-03-08 00:18:42.827755 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-08 00:18:42.827873 | orchestrator | + deactivate 2026-03-08 00:18:42.827896 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-08 00:18:42.827921 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-08 00:18:42.827940 | orchestrator | + export PATH 2026-03-08 00:18:42.827959 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-08 00:18:42.827978 | orchestrator | + '[' -n '' ']' 2026-03-08 00:18:42.827997 | orchestrator | + hash -r 2026-03-08 00:18:42.828015 | orchestrator | + '[' -n '' ']' 2026-03-08 00:18:42.828033 | orchestrator | + unset VIRTUAL_ENV 2026-03-08 00:18:42.828052 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-08 00:18:42.828070 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-08 00:18:42.828089 | orchestrator | + unset -f deactivate 2026-03-08 00:18:42.828110 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-08 00:18:42.833959 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-08 00:18:42.834075 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-08 00:18:42.834090 | orchestrator | + local max_attempts=60 2026-03-08 00:18:42.834102 | orchestrator | + local name=ceph-ansible 2026-03-08 00:18:42.834113 | orchestrator | + local attempt_num=1 2026-03-08 00:18:42.835319 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:18:42.877993 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:18:42.878130 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-08 00:18:42.878146 | orchestrator | + local max_attempts=60 2026-03-08 00:18:42.878157 | orchestrator | + local name=kolla-ansible 2026-03-08 00:18:42.878167 | orchestrator | + local attempt_num=1 2026-03-08 00:18:42.878375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-08 00:18:42.906934 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:18:42.907019 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-08 00:18:42.907033 | orchestrator | + local max_attempts=60 2026-03-08 00:18:42.907045 | orchestrator | + local name=osism-ansible 2026-03-08 00:18:42.907056 | orchestrator | + local attempt_num=1 2026-03-08 00:18:42.908008 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-08 00:18:42.946645 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:18:42.946765 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-08 00:18:42.946779 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-08 00:18:43.621579 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-08 00:18:43.825234 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-08 00:18:43.825370 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825387 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825399 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-08 00:18:43.825411 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-08 00:18:43.825422 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825432 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825443 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 55 seconds (healthy) 2026-03-08 00:18:43.825470 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825482 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-08 00:18:43.825493 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-08 00:18:43.825504 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-08 00:18:43.825515 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825525 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-08 00:18:43.825536 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.825547 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-08 00:18:43.830178 | orchestrator | ++ semver latest 7.0.0 2026-03-08 00:18:43.870897 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:18:43.870971 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-08 00:18:43.870991 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-08 00:18:43.873974 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-08 00:18:55.756306 | orchestrator | 2026-03-08 00:18:55 | INFO  | Prepare task for execution of resolvconf. 2026-03-08 00:18:55.957987 | orchestrator | 2026-03-08 00:18:55 | INFO  | Task 00b04a45-9685-471b-828d-2eb7353390a8 (resolvconf) was prepared for execution. 2026-03-08 00:18:55.958118 | orchestrator | 2026-03-08 00:18:55 | INFO  | It takes a moment until task 00b04a45-9685-471b-828d-2eb7353390a8 (resolvconf) has been started and output is visible here. 
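The trace above polls each manager container with `docker inspect -f '{{.State.Health.Status}}'` until it reports healthy. A minimal sketch of that helper, reconstructed from the traced variable names; `inspect_health` is a hypothetical stand-in for the real `docker inspect` call so the sketch runs without Docker:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the wait_for_container_healthy helper traced above.
# inspect_health is a stand-in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
inspect_health() {
    echo "healthy"
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container reports healthy, giving up after max_attempts.
    until [[ "$(inspect_health "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
    return 0
}

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
```

In the log the stand-in returns `healthy` on the first probe for all three containers (ceph-ansible, kolla-ansible, osism-ansible), so the loop exits immediately; the retry path only matters on a cold start.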
2026-03-08 00:19:08.703989 | orchestrator | 2026-03-08 00:19:08.704108 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-08 00:19:08.704124 | orchestrator | 2026-03-08 00:19:08.704136 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:19:08.704148 | orchestrator | Sunday 08 March 2026 00:18:59 +0000 (0:00:00.102) 0:00:00.102 ********** 2026-03-08 00:19:08.704159 | orchestrator | ok: [testbed-manager] 2026-03-08 00:19:08.704172 | orchestrator | 2026-03-08 00:19:08.704183 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-08 00:19:08.704195 | orchestrator | Sunday 08 March 2026 00:19:02 +0000 (0:00:03.277) 0:00:03.379 ********** 2026-03-08 00:19:08.704206 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:19:08.704217 | orchestrator | 2026-03-08 00:19:08.704228 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-08 00:19:08.704239 | orchestrator | Sunday 08 March 2026 00:19:03 +0000 (0:00:00.064) 0:00:03.443 ********** 2026-03-08 00:19:08.704250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-08 00:19:08.704262 | orchestrator | 2026-03-08 00:19:08.704273 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-08 00:19:08.704284 | orchestrator | Sunday 08 March 2026 00:19:03 +0000 (0:00:00.066) 0:00:03.510 ********** 2026-03-08 00:19:08.704306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:19:08.704317 | orchestrator | 2026-03-08 00:19:08.704328 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-08 00:19:08.704339 | orchestrator | Sunday 08 March 2026 00:19:03 +0000 (0:00:00.063) 0:00:03.573 ********** 2026-03-08 00:19:08.704350 | orchestrator | ok: [testbed-manager] 2026-03-08 00:19:08.704360 | orchestrator | 2026-03-08 00:19:08.704371 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-08 00:19:08.704382 | orchestrator | Sunday 08 March 2026 00:19:04 +0000 (0:00:00.865) 0:00:04.438 ********** 2026-03-08 00:19:08.704393 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:19:08.704403 | orchestrator | 2026-03-08 00:19:08.704414 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-08 00:19:08.704425 | orchestrator | Sunday 08 March 2026 00:19:04 +0000 (0:00:00.060) 0:00:04.499 ********** 2026-03-08 00:19:08.704435 | orchestrator | ok: [testbed-manager] 2026-03-08 00:19:08.704446 | orchestrator | 2026-03-08 00:19:08.704457 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-08 00:19:08.704467 | orchestrator | Sunday 08 March 2026 00:19:04 +0000 (0:00:00.460) 0:00:04.960 ********** 2026-03-08 00:19:08.704478 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:19:08.704489 | orchestrator | 2026-03-08 00:19:08.704500 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-08 00:19:08.704512 | orchestrator | Sunday 08 March 2026 00:19:04 +0000 (0:00:00.085) 0:00:05.045 ********** 2026-03-08 00:19:08.704526 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:08.704539 | orchestrator | 2026-03-08 00:19:08.704552 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-08 00:19:08.704566 | orchestrator | Sunday 08 March 2026 00:19:05 +0000 (0:00:00.548) 0:00:05.594 ********** 2026-03-08 00:19:08.704579 | orchestrator | changed: 
[testbed-manager] 2026-03-08 00:19:08.704591 | orchestrator | 2026-03-08 00:19:08.704605 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-08 00:19:08.704617 | orchestrator | Sunday 08 March 2026 00:19:06 +0000 (0:00:01.073) 0:00:06.667 ********** 2026-03-08 00:19:08.704630 | orchestrator | ok: [testbed-manager] 2026-03-08 00:19:08.704690 | orchestrator | 2026-03-08 00:19:08.704724 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-08 00:19:08.704738 | orchestrator | Sunday 08 March 2026 00:19:07 +0000 (0:00:00.983) 0:00:07.651 ********** 2026-03-08 00:19:08.704751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-08 00:19:08.704764 | orchestrator | 2026-03-08 00:19:08.704777 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-08 00:19:08.704789 | orchestrator | Sunday 08 March 2026 00:19:07 +0000 (0:00:00.073) 0:00:07.725 ********** 2026-03-08 00:19:08.704802 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:08.704816 | orchestrator | 2026-03-08 00:19:08.704829 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:19:08.704844 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:19:08.704858 | orchestrator | 2026-03-08 00:19:08.704872 | orchestrator | 2026-03-08 00:19:08.704883 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:19:08.704894 | orchestrator | Sunday 08 March 2026 00:19:08 +0000 (0:00:01.154) 0:00:08.879 ********** 2026-03-08 00:19:08.704904 | orchestrator | =============================================================================== 2026-03-08 00:19:08.704915 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.28s 2026-03-08 00:19:08.704926 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s 2026-03-08 00:19:08.704937 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2026-03-08 00:19:08.704947 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-03-08 00:19:08.704958 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.87s 2026-03-08 00:19:08.704969 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-03-08 00:19:08.704998 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2026-03-08 00:19:08.705010 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-08 00:19:08.705021 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-03-08 00:19:08.705032 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-08 00:19:08.705042 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-08 00:19:08.705053 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-08 00:19:08.705064 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-08 00:19:08.983141 | orchestrator | + osism apply sshconfig 2026-03-08 00:19:20.826955 | orchestrator | 2026-03-08 00:19:20 | INFO  | Prepare task for execution of sshconfig. 2026-03-08 00:19:20.892217 | orchestrator | 2026-03-08 00:19:20 | INFO  | Task ea407e4e-e97b-41df-bf3b-931eec1287e9 (sshconfig) was prepared for execution. 
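The resolvconf tasks above retrieve the file status of /etc/resolv.conf, archive it if it is a regular file, and link systemd-resolved's stub resolver in its place. A sketch of that sequence inferred from the task names (not the role's actual source), relocated under a throwaway directory so it runs without touching the real /etc and /run:

```shell
#!/usr/bin/env bash
# Sketch of the resolv.conf handling implied by the task names above;
# paths live under a temp dir so this runs without system privileges.
set -eu
root="$(mktemp -d)"
mkdir -p "$root/run/systemd/resolve" "$root/etc"
echo "nameserver 127.0.0.53" > "$root/run/systemd/resolve/stub-resolv.conf"
echo "nameserver 8.8.8.8" > "$root/etc/resolv.conf"

# Archive an existing regular file before replacing it with the symlink.
if [ -f "$root/etc/resolv.conf" ] && [ ! -L "$root/etc/resolv.conf" ]; then
    mv "$root/etc/resolv.conf" "$root/etc/resolv.conf.archived"
fi
ln -sfn "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"

readlink "$root/etc/resolv.conf"
cat "$root/etc/resolv.conf"
```

After the link step, reads of /etc/resolv.conf go through systemd-resolved's stub listener, which is why the role restarts the systemd-resolved service as its final task.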
2026-03-08 00:19:20.892310 | orchestrator | 2026-03-08 00:19:20 | INFO  | It takes a moment until task ea407e4e-e97b-41df-bf3b-931eec1287e9 (sshconfig) has been started and output is visible here. 2026-03-08 00:19:31.338911 | orchestrator | 2026-03-08 00:19:31.339015 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-08 00:19:31.339027 | orchestrator | 2026-03-08 00:19:31.339036 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-08 00:19:31.339045 | orchestrator | Sunday 08 March 2026 00:19:24 +0000 (0:00:00.117) 0:00:00.117 ********** 2026-03-08 00:19:31.339053 | orchestrator | ok: [testbed-manager] 2026-03-08 00:19:31.339062 | orchestrator | 2026-03-08 00:19:31.339070 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-08 00:19:31.339078 | orchestrator | Sunday 08 March 2026 00:19:25 +0000 (0:00:00.506) 0:00:00.623 ********** 2026-03-08 00:19:31.339109 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:31.339118 | orchestrator | 2026-03-08 00:19:31.339126 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-08 00:19:31.339134 | orchestrator | Sunday 08 March 2026 00:19:25 +0000 (0:00:00.445) 0:00:01.069 ********** 2026-03-08 00:19:31.339142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:19:31.339150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:19:31.339158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:19:31.339166 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:19:31.339174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:19:31.339181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-08 00:19:31.339189 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-08 00:19:31.339197 | orchestrator | 2026-03-08 00:19:31.339205 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-08 00:19:31.339212 | orchestrator | Sunday 08 March 2026 00:19:30 +0000 (0:00:05.031) 0:00:06.100 ********** 2026-03-08 00:19:31.339220 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:19:31.339228 | orchestrator | 2026-03-08 00:19:31.339236 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-08 00:19:31.339244 | orchestrator | Sunday 08 March 2026 00:19:30 +0000 (0:00:00.072) 0:00:06.173 ********** 2026-03-08 00:19:31.339251 | orchestrator | changed: [testbed-manager] 2026-03-08 00:19:31.339259 | orchestrator | 2026-03-08 00:19:31.339267 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:19:31.339276 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:19:31.339285 | orchestrator | 2026-03-08 00:19:31.339293 | orchestrator | 2026-03-08 00:19:31.339301 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:19:31.339309 | orchestrator | Sunday 08 March 2026 00:19:31 +0000 (0:00:00.502) 0:00:06.675 ********** 2026-03-08 00:19:31.339317 | orchestrator | =============================================================================== 2026-03-08 00:19:31.339325 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.03s 2026-03-08 00:19:31.339333 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2026-03-08 00:19:31.339341 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-03-08 00:19:31.339348 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2026-03-08 00:19:31.339356 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-08 00:19:31.525486 | orchestrator | + osism apply known-hosts 2026-03-08 00:19:43.339376 | orchestrator | 2026-03-08 00:19:43 | INFO  | Prepare task for execution of known-hosts. 2026-03-08 00:19:43.402093 | orchestrator | 2026-03-08 00:19:43 | INFO  | Task 2befe477-1ce3-407d-b90a-9f9d3fed15a3 (known-hosts) was prepared for execution. 2026-03-08 00:19:43.402189 | orchestrator | 2026-03-08 00:19:43 | INFO  | It takes a moment until task 2befe477-1ce3-407d-b90a-9f9d3fed15a3 (known-hosts) has been started and output is visible here. 2026-03-08 00:19:58.703488 | orchestrator | 2026-03-08 00:19:58.703610 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-08 00:19:58.703666 | orchestrator | 2026-03-08 00:19:58.703686 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-08 00:19:58.703707 | orchestrator | Sunday 08 March 2026 00:19:47 +0000 (0:00:00.146) 0:00:00.146 ********** 2026-03-08 00:19:58.703726 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:19:58.703746 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:19:58.703764 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-08 00:19:58.703809 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-08 00:19:58.703829 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:19:58.703849 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:19:58.703866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:19:58.703885 | orchestrator | 2026-03-08 00:19:58.703904 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-08 
00:19:58.703925 | orchestrator | Sunday 08 March 2026 00:19:52 +0000 (0:00:05.679) 0:00:05.826 ********** 2026-03-08 00:19:58.703957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-08 00:19:58.703980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-08 00:19:58.704001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-08 00:19:58.704021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-08 00:19:58.704042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-08 00:19:58.704061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-08 00:19:58.704081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-08 00:19:58.704101 | orchestrator | 2026-03-08 00:19:58.704119 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704133 | orchestrator | Sunday 08 March 2026 00:19:52 +0000 (0:00:00.161) 0:00:05.987 ********** 2026-03-08 00:19:58.704147 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChO4SCPr9iDvWFPdEuDjeyMtc9RM/GN8p1QXrzctuipuNldk/SlYsTYGqeDdAjvSeXcMe3j36T8WLE0XD5hJQ39dEZlJ5lCsMCL/0uZnFve8ZBDOFef84WABMovwpx0NVlhbwKjtRP0Yqr+5cSlKYxoMIXqmxyWO1Vb3CvVshQ4ny6lHg7TvZt9Xdf5LOmKAJuiXftPBvdUgZijUvWoPDAfEtPIsAg5fJmg2wKeWGuq0vx5T8wVdUIchjhvZ1IVdAoVUKXsHnCcWxIzNYw0r5v4FPrpWAw1yA3xvaVwK1bbjxQLGdT4Sc4AnSYEFB0LG8VebhLDYV3dnEqbUdC5KIOb9bhIt4RzWdsHVF+bz0qM5fQf34VsfM/01AP5x32t+Wsq9fSKniNsPmyQXPfeh4e0ddpSi4rnzMTJKyAeE4+htIkbG0IygdPq7dj586YSCX0N6UNG9qTGb6Y7HhMQnU8PTs5LxmslmKEXutByfja1wBelxDjmROQWwiDfzPBSqs=) 2026-03-08 00:19:58.704167 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBB54ntfjVioNIHtOBr1Pp/ylrTi6wWScKJMRyFnm6qVkueje9m36IM/9SVVgAHUlUYSWVptxnlORPSwjeIY8Nw=) 2026-03-08 00:19:58.704189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOLHSRvM4Bz9tclbfITFykguB0GYQhi85PuDZ7ZaFKgc) 2026-03-08 00:19:58.704209 | orchestrator | 2026-03-08 00:19:58.704228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704247 | orchestrator | Sunday 08 March 2026 00:19:54 +0000 (0:00:01.152) 0:00:07.140 ********** 2026-03-08 00:19:58.704266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLuuvxA3WO+3CJjK4/OEwdJI5Vc2LNifSLSsMsRfidoQ0xSm6W56lo0klylP53mzQlBG9gdowPgImChM7Y/UcLs=) 2026-03-08 00:19:58.704323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsIO1B/5js9Yr0SFOpITnT8eAcJOb/rAGjUBevKuz2d3Mw4IEYdXWLfVx2a9+Zh3XS1VO8+jp5DqhzPbQ0VjUX733+2HJd6YkU+CvHohb14g2MKUYhqvWAitZpuehdNcPQW05/eTmnz896W/G1YBUNBSmQ2FmiW7G2a51GqNvhXEDghOeflkCqNhW20jN7kNRLadXaxYJW79P6E9ia+mCl5CvSvsFh9UTN/HZfSG0qtYptRbfL0cm+gcsN35gCkmNtdwGEr7gS2iBygwr8fm38qt3NSxF5ral/LvHSb10oz7GVFBZAns60e/IPLhGx1AWlj9piTQS8l9SHMav4iS/VZAurH/2Q5u2kq3PazwHNW+wjmb2JviAU3F5/B40JSf7vtQGT0hmTXdExjckDxcGubL62B5akQR+yqR4KuueECJmAmeUA8wMo6PmgF2vQTi59q1av8abtS9QrPf4aLjaUg/aBMFcJY1xHY/oJdIAK8/6PLAXovtm57Mlbg5GJrM=) 2026-03-08 00:19:58.704359 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKtweYZtsHpC/e6EIVcW1cJ32/GwIb8qDkmZY1VNv0iL) 2026-03-08 00:19:58.704379 | orchestrator | 2026-03-08 00:19:58.704398 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704416 | orchestrator | Sunday 08 March 2026 00:19:55 +0000 (0:00:01.061) 0:00:08.202 ********** 2026-03-08 00:19:58.704436 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHHtQK6G0AwBpy7H88ok6FMuR7ZPZ50/KSy8ViK+iIW) 2026-03-08 00:19:58.704454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDz3t+pZQFDGIHGkeSUKbGXeJZ+Kz5+4gt4TJ9r3icRuqHWzliU0sbngbY1YeVmfyjaHhc24Rf96Naq7Jipkle4oWz0irkQj3Nfc2vbcatlovCZU4WoeWPV3AKz5Z/ElIEJSAZXrsyMsVOkeataqRSR5kYmLrXy7fq8rHzJErajJpMg//oo2cD8ngmYH4C6oExAzM2p+bYTVwaFBsGPD5smegXGw2SIz5hKqXarQ9JOsFlqGdEbHKasrzqqlTn6l3snUay+bdnOcK0/gIoow+JPyLDsV1+7ct47p423ajtkwrV3eSWH7UJjAA+pRdw5x6S9O6AvIp2l+p3pEqYAK2MabPFeypvnVaRGFzY6ZbE+yt8ZMMGdqp5Ki0epzEuVgGl+G2Rv9vSDsj+Mndr9jif1CF//CPQXxXZps060TFN43o4n2/IXISQmkY0+5/qTOoc0eZASeas8k1YzdnENUZlqRz2qZA1asIBVhOQfiaiHz4CWw9/f8YTGuWWXlOp68Lc=) 2026-03-08 00:19:58.704556 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGMHS8tL0G7xDi9YyhvtvNC1J117ms6TGzzlKktr4O0DYzltv4iSNtParwr+uLhuXG5Rlxu7nEUsWygaddtcvHA=) 2026-03-08 00:19:58.704579 | orchestrator | 2026-03-08 00:19:58.704598 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704669 | orchestrator | Sunday 08 March 2026 00:19:56 +0000 (0:00:00.975) 0:00:09.177 ********** 2026-03-08 00:19:58.704696 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP69AUF38UtsBtLqYf6r3GpI9V9sCvhaPHutEbMohtQ3) 2026-03-08 00:19:58.704709 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAJgEJyrFlq0PaUwl0jyn1sfHEmrFNgIsXashKTwTp6RGqWiWCOjWoTi2A5zIFdufjJYti2DFtgqf9f/UGEXWrmYe53nEzkeapWKjsjoNMwGW3bcOkVK038qsDifaJ/YlTS55CvxlBb2IM8S84dYNM/HRDVlqxnGApWlZqYEvMmWpKTA+7x3ui8uE7lio8AMSIwCu5TxLGILrRHJoYwfd2IVr2z/HSKDTte3glpQKRzcMmTBmOXrtLKUwlQQu/eaHigT5pU8f1EzeuQnQQrd6S5hvwDQ2q1LpvuK9PHwl3T3JtgHQRz5zgCQwijn1FEtCdg2J0HupBjutDD0Aj9MsFltiDEAPU3eQGc8DMitq38G02h8ubx5/jpVt7KEZgHRO7fm7etGLwKmTpLrvJBBVoi4ghsNs+1Tg47Sp9OAG2TRx6AxZyfwWvjo/J3QN6UT2MalDlHrcPeb6X9zdKfQSFGeERmOqWPptDCPq6X42nNhlXJbFB/u8k8Ng5hlBhuZM=) 2026-03-08 00:19:58.704721 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAiYNOXRiG3rebdXgjqYkefzF/W8UZ5Vz45pgTG8BzNL2egXWKZMN7zvld98iP5J0NTh1Rjf/ehnwPP2ye10e4=) 2026-03-08 00:19:58.704732 | orchestrator | 2026-03-08 00:19:58.704743 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704754 | orchestrator | Sunday 08 March 2026 00:19:57 +0000 (0:00:01.014) 0:00:10.191 ********** 2026-03-08 00:19:58.704764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvUUPE4dlDcFHitq6SNoo0hwJrI2qZD4y135QEUlc0xpAAfI62jZTtvnEhIIImfNTdI1VrKNf/77BEcLLauyFE=) 2026-03-08 00:19:58.704776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxTkXTPyXt/LSots8DVEBjepviUSPpXR407mpFcafKrNAq8HjBgXpy/EXyxDbt3aES9jYGL16kaM/jVQIu82auHXwWlATNTMGplYZSwB+X9ifu3MAowFk3M2u47HIMhPM9mmDK4wZb6Xn/RvK2OJOM5ZtNNx7p/w1H1eNL7A0uL1UAX3ONcxapA2QHsA9vezaYQ0cO/ezzky9PDXbDMxw9qrazlG+ZcAWecEjtPjegO8hUbnSbcmlHeUp62cn+BKRgKqH2fbvzOLelLY3TZBB2nNnF4tLk7cbidsj1DX6a+u4jN5FvrdoRPh08nRxBNUo303BN5RfnmD+FHZ+zfdZHwxt+pzdewDCwjVKO0yhBdGoQbucGwtkzVUH259pJVOWlsXxT9Lj9F4GfaPH2z0GK8QXpAbAJgznkevXxYbWPn73oQ/cnozfEr1tAsjJTjexsWzRVlTm0j1vIMEBL26DANdbPUJVg+lY0yvX+xMtKbHVGgc6uXgpgIKOsPHjYefs=) 2026-03-08 00:19:58.704796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7BxT2HtdjJ3VNOAy1tgYSqgaaXWXmWpbKlW3pmjdNB) 2026-03-08 00:19:58.704807 | orchestrator | 2026-03-08 00:19:58.704817 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:19:58.704828 | orchestrator | Sunday 08 March 2026 00:19:58 +0000 (0:00:01.126) 0:00:11.317 ********** 2026-03-08 00:19:58.704849 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOzsTNXSl/oF+UI7+B9TTrUvVBBE13gIx2fyMjk+jjnTWV8QtPLi0jvqiT+lvwFjya3P5qO0YvjeFDIuCw3CjI=) 2026-03-08 00:20:09.519283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDJeqTyzNrd9voMhgeVq8LchlFwQpLMGLfIPL3RscADVWBsowA3nNwaW3h6pzqlMUXeJfCZt8CU0ygGpr4dh26AtL64s1SFGediLP/Vyb7rG4e8LKZoxYcP+oGFylZUf5xBQNI4lvKpLjiUEFQoxiV5FbvxonhDl7b9+fhOLTWqnAX9MwrlNMCr9knDUgc7VwHyJ46AAFrKhoPSOiqAX+Vri3H+99bVmWWlAhUEJS8Oj2WyuaB1soeVeg5ywSUwAczJierD/ytQwj3E71K4WXHu+JAs0UYOZo6b1yUpKZz0CI14w3OklM8fXA+niVk9TOaYGJyNkdT4mZh3W0zruwdDfpNuB5r7gN/W/8t17gGmCt7q0n3FYf3Bd+gngdUlQQGMd//ApMRjFdZnd/LX7PKFHdcKwBdUi8Tk2GDEm03+CRB57R+PONH01LZjDuC0REYIXEjeyyUcziqk4CN8OX3m1jIBjfWP13w0C7cJH8yOtfomnrIo+gA8LkArg0rHC3k=) 2026-03-08 00:20:09.519378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvScRemIB3o04Lfy5XaUf5s3Z69aOCTyR+uFyj2CNFJ) 2026-03-08 00:20:09.519396 | orchestrator | 2026-03-08 00:20:09.519408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:09.519420 | orchestrator | Sunday 08 March 2026 00:19:59 +0000 (0:00:01.046) 0:00:12.364 ********** 2026-03-08 00:20:09.519432 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYc4qbqrN4HpuFpVudWDVDLfDwgty8yEcdN4Duok6C0jyqVug0sOBQS/SoD6ON/R3CO+XrX7hnIGJ+2zeBIUmccKKtBvaHSv+AVD91f3IFlTGjUWVL6ng9xB7lNqZ6veUyzWVNh95ph/4IfXECVKg7ZMoBvH81pAmQ2eQwDUkpIuvLkD1CWrEhYxWEdhAetlP8JAZiR8fL37wuarG4V2ca9fQBCflNp7Z1oDNxc0UulVmPS4tAIBNoWuyG2YpawYn8W835qFLHU+Wy1x1USBFC1ZFvXUlOt0g+ut5WdS1YGEjl9xwkCQW439qtfiCrxv0eE30P2fcA58DxNaBAWY0Yfy1PccIVg/2yf//UVl6D+clEBT7FjKtodWYDeDtC/3iyw4wkw9+6JW+oa5ftlVIUe9GfaAcFqThJz1D8xT2HR9MejlFwoI7yuTNgrkfqLjEs/4fQFJ9EEFxT1iHax+CC+eqIt17aTjMQOlZBoRt3WOV4yfw0zWtsZzBnnt5lUrE=) 2026-03-08 00:20:09.519444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpSYyeFZwTnFyoxc8Wv3SPLgsqR3lepvF04R/XAsCueT1ClX1wYMVGN5rr+GrIGlheAhDfGUpVpXiskPRW5Doo=) 2026-03-08 00:20:09.519456 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE19hmsQGduHyXqejiVwcSrbFPwO3HZZE1oDNrrwOgDQ) 2026-03-08 00:20:09.519468 | orchestrator | 2026-03-08 00:20:09.519479 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-08 00:20:09.519490 | orchestrator | Sunday 08 March 2026 00:20:00 +0000 (0:00:01.050) 0:00:13.415 ********** 2026-03-08 00:20:09.519502 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-08 00:20:09.519513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-08 00:20:09.519524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-08 00:20:09.519535 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-08 00:20:09.519546 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-08 00:20:09.519571 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-08 00:20:09.519583 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-08 00:20:09.519642 | orchestrator | 2026-03-08 00:20:09.519657 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-08 00:20:09.519669 | orchestrator | Sunday 08 March 2026 00:20:05 +0000 (0:00:05.191) 0:00:18.606 ********** 2026-03-08 00:20:09.519681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-08 00:20:09.519693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-08 00:20:09.519704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-5) 2026-03-08 00:20:09.519714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-08 00:20:09.519725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-08 00:20:09.519736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-08 00:20:09.519747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-08 00:20:09.519758 | orchestrator | 2026-03-08 00:20:09.519785 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:09.519797 | orchestrator | Sunday 08 March 2026 00:20:05 +0000 (0:00:00.193) 0:00:18.800 ********** 2026-03-08 00:20:09.519811 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChO4SCPr9iDvWFPdEuDjeyMtc9RM/GN8p1QXrzctuipuNldk/SlYsTYGqeDdAjvSeXcMe3j36T8WLE0XD5hJQ39dEZlJ5lCsMCL/0uZnFve8ZBDOFef84WABMovwpx0NVlhbwKjtRP0Yqr+5cSlKYxoMIXqmxyWO1Vb3CvVshQ4ny6lHg7TvZt9Xdf5LOmKAJuiXftPBvdUgZijUvWoPDAfEtPIsAg5fJmg2wKeWGuq0vx5T8wVdUIchjhvZ1IVdAoVUKXsHnCcWxIzNYw0r5v4FPrpWAw1yA3xvaVwK1bbjxQLGdT4Sc4AnSYEFB0LG8VebhLDYV3dnEqbUdC5KIOb9bhIt4RzWdsHVF+bz0qM5fQf34VsfM/01AP5x32t+Wsq9fSKniNsPmyQXPfeh4e0ddpSi4rnzMTJKyAeE4+htIkbG0IygdPq7dj586YSCX0N6UNG9qTGb6Y7HhMQnU8PTs5LxmslmKEXutByfja1wBelxDjmROQWwiDfzPBSqs=) 2026-03-08 00:20:09.519823 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBB54ntfjVioNIHtOBr1Pp/ylrTi6wWScKJMRyFnm6qVkueje9m36IM/9SVVgAHUlUYSWVptxnlORPSwjeIY8Nw=) 2026-03-08 00:20:09.519834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOLHSRvM4Bz9tclbfITFykguB0GYQhi85PuDZ7ZaFKgc) 2026-03-08 00:20:09.519847 | orchestrator | 2026-03-08 00:20:09.519860 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:09.519873 | orchestrator | Sunday 08 March 2026 00:20:06 +0000 (0:00:01.022) 0:00:19.823 ********** 2026-03-08 00:20:09.519886 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKtweYZtsHpC/e6EIVcW1cJ32/GwIb8qDkmZY1VNv0iL) 2026-03-08 00:20:09.519900 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsIO1B/5js9Yr0SFOpITnT8eAcJOb/rAGjUBevKuz2d3Mw4IEYdXWLfVx2a9+Zh3XS1VO8+jp5DqhzPbQ0VjUX733+2HJd6YkU+CvHohb14g2MKUYhqvWAitZpuehdNcPQW05/eTmnz896W/G1YBUNBSmQ2FmiW7G2a51GqNvhXEDghOeflkCqNhW20jN7kNRLadXaxYJW79P6E9ia+mCl5CvSvsFh9UTN/HZfSG0qtYptRbfL0cm+gcsN35gCkmNtdwGEr7gS2iBygwr8fm38qt3NSxF5ral/LvHSb10oz7GVFBZAns60e/IPLhGx1AWlj9piTQS8l9SHMav4iS/VZAurH/2Q5u2kq3PazwHNW+wjmb2JviAU3F5/B40JSf7vtQGT0hmTXdExjckDxcGubL62B5akQR+yqR4KuueECJmAmeUA8wMo6PmgF2vQTi59q1av8abtS9QrPf4aLjaUg/aBMFcJY1xHY/oJdIAK8/6PLAXovtm57Mlbg5GJrM=) 2026-03-08 00:20:09.519921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLuuvxA3WO+3CJjK4/OEwdJI5Vc2LNifSLSsMsRfidoQ0xSm6W56lo0klylP53mzQlBG9gdowPgImChM7Y/UcLs=) 2026-03-08 00:20:09.519935 | orchestrator | 2026-03-08 00:20:09.519948 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:09.519960 | orchestrator | Sunday 08 March 2026 00:20:07 +0000 (0:00:00.988) 0:00:20.811 ********** 2026-03-08 00:20:09.519973 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHHtQK6G0AwBpy7H88ok6FMuR7ZPZ50/KSy8ViK+iIW) 2026-03-08 00:20:09.519987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDz3t+pZQFDGIHGkeSUKbGXeJZ+Kz5+4gt4TJ9r3icRuqHWzliU0sbngbY1YeVmfyjaHhc24Rf96Naq7Jipkle4oWz0irkQj3Nfc2vbcatlovCZU4WoeWPV3AKz5Z/ElIEJSAZXrsyMsVOkeataqRSR5kYmLrXy7fq8rHzJErajJpMg//oo2cD8ngmYH4C6oExAzM2p+bYTVwaFBsGPD5smegXGw2SIz5hKqXarQ9JOsFlqGdEbHKasrzqqlTn6l3snUay+bdnOcK0/gIoow+JPyLDsV1+7ct47p423ajtkwrV3eSWH7UJjAA+pRdw5x6S9O6AvIp2l+p3pEqYAK2MabPFeypvnVaRGFzY6ZbE+yt8ZMMGdqp5Ki0epzEuVgGl+G2Rv9vSDsj+Mndr9jif1CF//CPQXxXZps060TFN43o4n2/IXISQmkY0+5/qTOoc0eZASeas8k1YzdnENUZlqRz2qZA1asIBVhOQfiaiHz4CWw9/f8YTGuWWXlOp68Lc=) 2026-03-08 00:20:09.520001 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGMHS8tL0G7xDi9YyhvtvNC1J117ms6TGzzlKktr4O0DYzltv4iSNtParwr+uLhuXG5Rlxu7nEUsWygaddtcvHA=) 2026-03-08 00:20:09.520013 | orchestrator | 2026-03-08 00:20:09.520026 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:09.520039 | orchestrator | Sunday 08 March 2026 00:20:08 +0000 (0:00:01.002) 0:00:21.813 ********** 2026-03-08 00:20:09.520052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP69AUF38UtsBtLqYf6r3GpI9V9sCvhaPHutEbMohtQ3) 2026-03-08 00:20:09.520083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAJgEJyrFlq0PaUwl0jyn1sfHEmrFNgIsXashKTwTp6RGqWiWCOjWoTi2A5zIFdufjJYti2DFtgqf9f/UGEXWrmYe53nEzkeapWKjsjoNMwGW3bcOkVK038qsDifaJ/YlTS55CvxlBb2IM8S84dYNM/HRDVlqxnGApWlZqYEvMmWpKTA+7x3ui8uE7lio8AMSIwCu5TxLGILrRHJoYwfd2IVr2z/HSKDTte3glpQKRzcMmTBmOXrtLKUwlQQu/eaHigT5pU8f1EzeuQnQQrd6S5hvwDQ2q1LpvuK9PHwl3T3JtgHQRz5zgCQwijn1FEtCdg2J0HupBjutDD0Aj9MsFltiDEAPU3eQGc8DMitq38G02h8ubx5/jpVt7KEZgHRO7fm7etGLwKmTpLrvJBBVoi4ghsNs+1Tg47Sp9OAG2TRx6AxZyfwWvjo/J3QN6UT2MalDlHrcPeb6X9zdKfQSFGeERmOqWPptDCPq6X42nNhlXJbFB/u8k8Ng5hlBhuZM=) 2026-03-08 00:20:14.045316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAiYNOXRiG3rebdXgjqYkefzF/W8UZ5Vz45pgTG8BzNL2egXWKZMN7zvld98iP5J0NTh1Rjf/ehnwPP2ye10e4=) 2026-03-08 00:20:14.045414 | orchestrator | 2026-03-08 00:20:14.045429 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:14.045441 | orchestrator | Sunday 08 March 2026 00:20:09 +0000 (0:00:01.027) 0:00:22.840 ********** 2026-03-08 00:20:14.045453 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxTkXTPyXt/LSots8DVEBjepviUSPpXR407mpFcafKrNAq8HjBgXpy/EXyxDbt3aES9jYGL16kaM/jVQIu82auHXwWlATNTMGplYZSwB+X9ifu3MAowFk3M2u47HIMhPM9mmDK4wZb6Xn/RvK2OJOM5ZtNNx7p/w1H1eNL7A0uL1UAX3ONcxapA2QHsA9vezaYQ0cO/ezzky9PDXbDMxw9qrazlG+ZcAWecEjtPjegO8hUbnSbcmlHeUp62cn+BKRgKqH2fbvzOLelLY3TZBB2nNnF4tLk7cbidsj1DX6a+u4jN5FvrdoRPh08nRxBNUo303BN5RfnmD+FHZ+zfdZHwxt+pzdewDCwjVKO0yhBdGoQbucGwtkzVUH259pJVOWlsXxT9Lj9F4GfaPH2z0GK8QXpAbAJgznkevXxYbWPn73oQ/cnozfEr1tAsjJTjexsWzRVlTm0j1vIMEBL26DANdbPUJVg+lY0yvX+xMtKbHVGgc6uXgpgIKOsPHjYefs=) 2026-03-08 00:20:14.045467 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvUUPE4dlDcFHitq6SNoo0hwJrI2qZD4y135QEUlc0xpAAfI62jZTtvnEhIIImfNTdI1VrKNf/77BEcLLauyFE=) 
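The known_hosts tasks above run `ssh-keyscan` per host and then write the scanned RSA, ECDSA, and Ed25519 entries for both hostnames and `ansible_host` addresses. A minimal sketch of the merge step, assuming entries are `host key-type base64-key` lines as shown in the log (hypothetical helper, not the role's actual code):

```python
# Hypothetical sketch: merge freshly scanned host keys into existing
# known_hosts content, skipping entries already present for a given
# (host, key-type) pair. The osism.commons.known_hosts role's real
# implementation may differ.
def merge_known_hosts(existing: list[str], scanned: list[str]) -> list[str]:
    # Index existing entries by (host, key-type), the first two fields.
    seen = {tuple(line.split()[:2]) for line in existing if line.strip()}
    merged = list(existing)
    for line in scanned:
        if not line.strip():
            continue
        key = tuple(line.split()[:2])
        if key not in seen:
            seen.add(key)
            merged.append(line)
    return merged
```

This mirrors why reruns of the play report `ok` rather than `changed`: an entry already present for a host and key type is left untouched.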
2026-03-08 00:20:14.045502 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7BxT2HtdjJ3VNOAy1tgYSqgaaXWXmWpbKlW3pmjdNB) 2026-03-08 00:20:14.045514 | orchestrator | 2026-03-08 00:20:14.045537 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:14.045547 | orchestrator | Sunday 08 March 2026 00:20:10 +0000 (0:00:00.979) 0:00:23.820 ********** 2026-03-08 00:20:14.045557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJeqTyzNrd9voMhgeVq8LchlFwQpLMGLfIPL3RscADVWBsowA3nNwaW3h6pzqlMUXeJfCZt8CU0ygGpr4dh26AtL64s1SFGediLP/Vyb7rG4e8LKZoxYcP+oGFylZUf5xBQNI4lvKpLjiUEFQoxiV5FbvxonhDl7b9+fhOLTWqnAX9MwrlNMCr9knDUgc7VwHyJ46AAFrKhoPSOiqAX+Vri3H+99bVmWWlAhUEJS8Oj2WyuaB1soeVeg5ywSUwAczJierD/ytQwj3E71K4WXHu+JAs0UYOZo6b1yUpKZz0CI14w3OklM8fXA+niVk9TOaYGJyNkdT4mZh3W0zruwdDfpNuB5r7gN/W/8t17gGmCt7q0n3FYf3Bd+gngdUlQQGMd//ApMRjFdZnd/LX7PKFHdcKwBdUi8Tk2GDEm03+CRB57R+PONH01LZjDuC0REYIXEjeyyUcziqk4CN8OX3m1jIBjfWP13w0C7cJH8yOtfomnrIo+gA8LkArg0rHC3k=) 2026-03-08 00:20:14.045567 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOzsTNXSl/oF+UI7+B9TTrUvVBBE13gIx2fyMjk+jjnTWV8QtPLi0jvqiT+lvwFjya3P5qO0YvjeFDIuCw3CjI=) 2026-03-08 00:20:14.045577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvScRemIB3o04Lfy5XaUf5s3Z69aOCTyR+uFyj2CNFJ) 2026-03-08 00:20:14.045587 | orchestrator | 2026-03-08 00:20:14.045597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-08 00:20:14.045666 | orchestrator | Sunday 08 March 2026 00:20:11 +0000 (0:00:01.023) 0:00:24.844 ********** 2026-03-08 00:20:14.045678 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYc4qbqrN4HpuFpVudWDVDLfDwgty8yEcdN4Duok6C0jyqVug0sOBQS/SoD6ON/R3CO+XrX7hnIGJ+2zeBIUmccKKtBvaHSv+AVD91f3IFlTGjUWVL6ng9xB7lNqZ6veUyzWVNh95ph/4IfXECVKg7ZMoBvH81pAmQ2eQwDUkpIuvLkD1CWrEhYxWEdhAetlP8JAZiR8fL37wuarG4V2ca9fQBCflNp7Z1oDNxc0UulVmPS4tAIBNoWuyG2YpawYn8W835qFLHU+Wy1x1USBFC1ZFvXUlOt0g+ut5WdS1YGEjl9xwkCQW439qtfiCrxv0eE30P2fcA58DxNaBAWY0Yfy1PccIVg/2yf//UVl6D+clEBT7FjKtodWYDeDtC/3iyw4wkw9+6JW+oa5ftlVIUe9GfaAcFqThJz1D8xT2HR9MejlFwoI7yuTNgrkfqLjEs/4fQFJ9EEFxT1iHax+CC+eqIt17aTjMQOlZBoRt3WOV4yfw0zWtsZzBnnt5lUrE=) 2026-03-08 00:20:14.045689 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpSYyeFZwTnFyoxc8Wv3SPLgsqR3lepvF04R/XAsCueT1ClX1wYMVGN5rr+GrIGlheAhDfGUpVpXiskPRW5Doo=) 2026-03-08 00:20:14.045698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE19hmsQGduHyXqejiVwcSrbFPwO3HZZE1oDNrrwOgDQ) 2026-03-08 00:20:14.045708 | orchestrator | 2026-03-08 00:20:14.045718 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-08 00:20:14.045728 | orchestrator | Sunday 08 March 2026 00:20:12 +0000 (0:00:01.033) 0:00:25.878 ********** 2026-03-08 00:20:14.045738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-08 00:20:14.045748 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-08 00:20:14.045758 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-08 00:20:14.045768 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-08 00:20:14.045794 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-08 00:20:14.045805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-08 00:20:14.045814 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-08 00:20:14.045824 | orchestrator | 
skipping: [testbed-manager] 2026-03-08 00:20:14.045834 | orchestrator | 2026-03-08 00:20:14.045844 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-08 00:20:14.045853 | orchestrator | Sunday 08 March 2026 00:20:13 +0000 (0:00:00.165) 0:00:26.043 ********** 2026-03-08 00:20:14.045870 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:14.045880 | orchestrator | 2026-03-08 00:20:14.045890 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-08 00:20:14.045899 | orchestrator | Sunday 08 March 2026 00:20:13 +0000 (0:00:00.058) 0:00:26.102 ********** 2026-03-08 00:20:14.045909 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:20:14.045918 | orchestrator | 2026-03-08 00:20:14.045928 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-08 00:20:14.045937 | orchestrator | Sunday 08 March 2026 00:20:13 +0000 (0:00:00.049) 0:00:26.151 ********** 2026-03-08 00:20:14.045947 | orchestrator | changed: [testbed-manager] 2026-03-08 00:20:14.045956 | orchestrator | 2026-03-08 00:20:14.045966 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:20:14.045976 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:20:14.045987 | orchestrator | 2026-03-08 00:20:14.045997 | orchestrator | 2026-03-08 00:20:14.046006 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:20:14.046089 | orchestrator | Sunday 08 March 2026 00:20:13 +0000 (0:00:00.699) 0:00:26.851 ********** 2026-03-08 00:20:14.046101 | orchestrator | =============================================================================== 2026-03-08 00:20:14.046111 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.68s 2026-03-08 
00:20:14.046121 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2026-03-08 00:20:14.046131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-08 00:20:14.046141 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-08 00:20:14.046150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-08 00:20:14.046160 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-08 00:20:14.046170 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-08 00:20:14.046179 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-08 00:20:14.046189 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-08 00:20:14.046198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:20:14.046208 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-08 00:20:14.046218 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-08 00:20:14.046227 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-08 00:20:14.046244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-08 00:20:14.046254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-08 00:20:14.046264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-08 00:20:14.046274 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-03-08 
00:20:14.046284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-03-08 00:20:14.046294 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-08 00:20:14.046304 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-08 00:20:14.341140 | orchestrator | + osism apply squid 2026-03-08 00:20:26.406425 | orchestrator | 2026-03-08 00:20:26 | INFO  | Prepare task for execution of squid. 2026-03-08 00:20:26.479333 | orchestrator | 2026-03-08 00:20:26 | INFO  | Task 586d0da7-4d47-45ba-8732-33d2d06e5171 (squid) was prepared for execution. 2026-03-08 00:20:26.479431 | orchestrator | 2026-03-08 00:20:26 | INFO  | It takes a moment until task 586d0da7-4d47-45ba-8732-33d2d06e5171 (squid) has been started and output is visible here. 2026-03-08 00:22:19.666753 | orchestrator | 2026-03-08 00:22:19.666874 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-08 00:22:19.666891 | orchestrator | 2026-03-08 00:22:19.666903 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-08 00:22:19.666915 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:00.119) 0:00:00.119 ********** 2026-03-08 00:22:19.666926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:22:19.666938 | orchestrator | 2026-03-08 00:22:19.666949 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-08 00:22:19.666960 | orchestrator | Sunday 08 March 2026 00:20:30 +0000 (0:00:00.083) 0:00:00.203 ********** 2026-03-08 00:22:19.666971 | orchestrator | ok: [testbed-manager] 2026-03-08 00:22:19.666983 | orchestrator | 2026-03-08 00:22:19.666994 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-08 00:22:19.667005 | orchestrator | Sunday 08 March 2026 00:20:31 +0000 (0:00:01.098) 0:00:01.301 ********** 2026-03-08 00:22:19.667016 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-08 00:22:19.667027 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-08 00:22:19.667037 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-08 00:22:19.667048 | orchestrator | 2026-03-08 00:22:19.667059 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-08 00:22:19.667069 | orchestrator | Sunday 08 March 2026 00:20:32 +0000 (0:00:01.002) 0:00:02.303 ********** 2026-03-08 00:22:19.667080 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-08 00:22:19.667091 | orchestrator | 2026-03-08 00:22:19.667102 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-08 00:22:19.667112 | orchestrator | Sunday 08 March 2026 00:20:33 +0000 (0:00:00.954) 0:00:03.258 ********** 2026-03-08 00:22:19.667123 | orchestrator | ok: [testbed-manager] 2026-03-08 00:22:19.667133 | orchestrator | 2026-03-08 00:22:19.667144 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-08 00:22:19.667155 | orchestrator | Sunday 08 March 2026 00:20:34 +0000 (0:00:00.313) 0:00:03.572 ********** 2026-03-08 00:22:19.667165 | orchestrator | changed: [testbed-manager] 2026-03-08 00:22:19.667176 | orchestrator | 2026-03-08 00:22:19.667187 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-08 00:22:19.667197 | orchestrator | Sunday 08 March 2026 00:20:34 +0000 (0:00:00.791) 0:00:04.363 ********** 2026-03-08 00:22:19.667208 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-08 00:22:19.667219 | orchestrator | ok: [testbed-manager] 2026-03-08 00:22:19.667230 | orchestrator | 2026-03-08 00:22:19.667241 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-08 00:22:19.667251 | orchestrator | Sunday 08 March 2026 00:21:06 +0000 (0:00:31.580) 0:00:35.944 ********** 2026-03-08 00:22:19.667262 | orchestrator | changed: [testbed-manager] 2026-03-08 00:22:19.667272 | orchestrator | 2026-03-08 00:22:19.667299 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-08 00:22:19.667313 | orchestrator | Sunday 08 March 2026 00:21:18 +0000 (0:00:12.240) 0:00:48.184 ********** 2026-03-08 00:22:19.667326 | orchestrator | Pausing for 60 seconds 2026-03-08 00:22:19.667338 | orchestrator | changed: [testbed-manager] 2026-03-08 00:22:19.667351 | orchestrator | 2026-03-08 00:22:19.667364 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-08 00:22:19.667377 | orchestrator | Sunday 08 March 2026 00:22:18 +0000 (0:01:00.100) 0:01:48.285 ********** 2026-03-08 00:22:19.667390 | orchestrator | ok: [testbed-manager] 2026-03-08 00:22:19.667402 | orchestrator | 2026-03-08 00:22:19.667415 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-08 00:22:19.667452 | orchestrator | Sunday 08 March 2026 00:22:18 +0000 (0:00:00.069) 0:01:48.354 ********** 2026-03-08 00:22:19.667465 | orchestrator | changed: [testbed-manager] 2026-03-08 00:22:19.667477 | orchestrator | 2026-03-08 00:22:19.667490 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:22:19.667502 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:22:19.667515 | orchestrator | 2026-03-08 00:22:19.667527 | orchestrator | 2026-03-08 00:22:19.667560 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:22:19.667573 | orchestrator | Sunday 08 March 2026 00:22:19 +0000 (0:00:00.599) 0:01:48.953 ********** 2026-03-08 00:22:19.667586 | orchestrator | =============================================================================== 2026-03-08 00:22:19.667598 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-08 00:22:19.667608 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.58s 2026-03-08 00:22:19.667619 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.24s 2026-03-08 00:22:19.667630 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.10s 2026-03-08 00:22:19.667640 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s 2026-03-08 00:22:19.667651 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2026-03-08 00:22:19.667661 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.79s 2026-03-08 00:22:19.667672 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-03-08 00:22:19.667682 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2026-03-08 00:22:19.667693 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-08 00:22:19.667703 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-08 00:22:19.929738 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-08 00:22:19.929831 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-08 00:22:19.936247 | orchestrator | + set -e 2026-03-08 00:22:19.936312 | orchestrator | + NAMESPACE=kolla 
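The squid handlers above follow a restart-then-verify pattern: restart the service, pause 60 seconds, then wait for a healthy state (the "Manage squid service" task itself also retried once before succeeding). That poll-with-timeout pattern can be sketched generically as follows; the function name and parameters are illustrative, not taken from the role:

```python
import time

# Hypothetical sketch of the restart-and-wait pattern seen in the squid
# handlers: poll a health check until it passes or a deadline expires.
def wait_until(check, timeout: float = 60.0, interval: float = 5.0) -> bool:
    """Call `check()` every `interval` seconds; return True as soon as it
    succeeds, or False once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

A fixed `Pausing for 60 seconds` step, as in the log, trades precision for simplicity; a poll loop like this returns as soon as the container reports healthy.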
2026-03-08 00:22:19.936325 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-08 00:22:19.943014 | orchestrator | ++ semver latest 9.0.0 2026-03-08 00:22:20.000864 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-08 00:22:20.000957 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-08 00:22:20.001427 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-08 00:22:31.955819 | orchestrator | 2026-03-08 00:22:31 | INFO  | Prepare task for execution of operator. 2026-03-08 00:22:32.021560 | orchestrator | 2026-03-08 00:22:32 | INFO  | Task a4b4e0c1-5501-4950-bf2b-d5a0ea525090 (operator) was prepared for execution. 2026-03-08 00:22:32.021656 | orchestrator | 2026-03-08 00:22:32 | INFO  | It takes a moment until task a4b4e0c1-5501-4950-bf2b-d5a0ea525090 (operator) has been started and output is visible here. 2026-03-08 00:22:48.087803 | orchestrator | 2026-03-08 00:22:48.087910 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-08 00:22:48.087924 | orchestrator | 2026-03-08 00:22:48.087934 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 00:22:48.087944 | orchestrator | Sunday 08 March 2026 00:22:36 +0000 (0:00:00.110) 0:00:00.110 ********** 2026-03-08 00:22:48.087953 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:22:48.087965 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:22:48.087972 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:22:48.087979 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:22:48.087989 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:22:48.088000 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:22:48.088012 | orchestrator | 2026-03-08 00:22:48.088018 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-08 00:22:48.088045 | orchestrator | Sunday 08 March 2026 
00:22:39 +0000 (0:00:03.378) 0:00:03.488 ********** 2026-03-08 00:22:48.088055 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:22:48.088066 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:22:48.088076 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:22:48.088083 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:22:48.088089 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:22:48.088095 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:22:48.088101 | orchestrator | 2026-03-08 00:22:48.088112 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-08 00:22:48.088122 | orchestrator | 2026-03-08 00:22:48.088132 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-08 00:22:48.088143 | orchestrator | Sunday 08 March 2026 00:22:40 +0000 (0:00:00.757) 0:00:04.246 ********** 2026-03-08 00:22:48.088154 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:22:48.088165 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:22:48.088171 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:22:48.088177 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:22:48.088183 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:22:48.088188 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:22:48.088194 | orchestrator | 2026-03-08 00:22:48.088200 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-08 00:22:48.088206 | orchestrator | Sunday 08 March 2026 00:22:40 +0000 (0:00:00.140) 0:00:04.386 ********** 2026-03-08 00:22:48.088212 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:22:48.088217 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:22:48.088223 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:22:48.088229 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:22:48.088255 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:22:48.088266 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:22:48.088276 | 
orchestrator | 2026-03-08 00:22:48.088287 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-08 00:22:48.088298 | orchestrator | Sunday 08 March 2026 00:22:40 +0000 (0:00:00.136) 0:00:04.523 ********** 2026-03-08 00:22:48.088309 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:48.088320 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:48.088330 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:48.088340 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:48.088350 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:48.088359 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:48.088370 | orchestrator | 2026-03-08 00:22:48.088380 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-08 00:22:48.088391 | orchestrator | Sunday 08 March 2026 00:22:41 +0000 (0:00:00.646) 0:00:05.170 ********** 2026-03-08 00:22:48.088401 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:48.088412 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:48.088423 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:48.088434 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:48.088446 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:48.088456 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:48.088466 | orchestrator | 2026-03-08 00:22:48.088477 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-08 00:22:48.088488 | orchestrator | Sunday 08 March 2026 00:22:42 +0000 (0:00:00.835) 0:00:06.005 ********** 2026-03-08 00:22:48.088499 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-08 00:22:48.088534 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-08 00:22:48.088544 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-08 00:22:48.088555 | orchestrator | changed: [testbed-node-3] => 
(item=adm) 2026-03-08 00:22:48.088564 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-08 00:22:48.088574 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-08 00:22:48.088585 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-08 00:22:48.088595 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-08 00:22:48.088605 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-08 00:22:48.088622 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-08 00:22:48.088630 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-08 00:22:48.088637 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-08 00:22:48.088644 | orchestrator | 2026-03-08 00:22:48.088651 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-08 00:22:48.088657 | orchestrator | Sunday 08 March 2026 00:22:43 +0000 (0:00:01.183) 0:00:07.189 ********** 2026-03-08 00:22:48.088664 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:48.088671 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:48.088679 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:48.088689 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:48.088700 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:48.088710 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:48.088720 | orchestrator | 2026-03-08 00:22:48.088731 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-08 00:22:48.088743 | orchestrator | Sunday 08 March 2026 00:22:44 +0000 (0:00:01.371) 0:00:08.560 ********** 2026-03-08 00:22:48.088754 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088765 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088771 | orchestrator | changed: [testbed-node-3] => (item=export 
LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088776 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088782 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088806 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-08 00:22:48.088816 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088827 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088837 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088848 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088858 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088868 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-08 00:22:48.088878 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088889 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-08 00:22:48.088899 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-08 00:22:48.088909 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-08 00:22:48.088919 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088928 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088937 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088947 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088957 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-08 00:22:48.088968 | orchestrator | 2026-03-08 00:22:48.088978 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-08 00:22:48.088990 | orchestrator | Sunday 08 March 2026 00:22:46 +0000 (0:00:01.321) 0:00:09.882 ********** 2026-03-08 00:22:48.089002 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:48.089011 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:48.089022 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:48.089037 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:48.089047 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:48.089058 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:48.089068 | orchestrator | 2026-03-08 00:22:48.089079 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-08 00:22:48.089096 | orchestrator | Sunday 08 March 2026 00:22:46 +0000 (0:00:00.150) 0:00:10.033 ********** 2026-03-08 00:22:48.089108 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:48.089122 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:48.089135 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:48.089147 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:48.089158 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:22:48.089170 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:48.089182 | orchestrator | 2026-03-08 00:22:48.089192 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-08 00:22:48.089202 | orchestrator | Sunday 08 March 2026 00:22:46 +0000 (0:00:00.206) 0:00:10.240 ********** 2026-03-08 00:22:48.089212 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:48.089221 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:48.089231 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:48.089241 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:48.089250 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:48.089260 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:48.089270 | orchestrator | 2026-03-08 00:22:48.089280 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-08 00:22:48.089290 | orchestrator | Sunday 08 March 2026 00:22:46 +0000 (0:00:00.568) 0:00:10.808 ********** 2026-03-08 00:22:48.089300 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:48.089310 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:48.089319 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:48.089330 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:48.089340 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:48.089350 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:48.089361 | orchestrator | 2026-03-08 00:22:48.089368 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-08 00:22:48.089373 | orchestrator | Sunday 08 March 2026 00:22:47 +0000 (0:00:00.189) 0:00:10.998 ********** 2026-03-08 00:22:48.089379 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-08 00:22:48.089385 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-08 00:22:48.089390 | 
orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:48.089396 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-08 00:22:48.089401 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:48.089407 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:48.089413 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-08 00:22:48.089418 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:48.089424 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-08 00:22:48.089430 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:48.089435 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-08 00:22:48.089441 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:48.089447 | orchestrator | 2026-03-08 00:22:48.089452 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-08 00:22:48.089458 | orchestrator | Sunday 08 March 2026 00:22:47 +0000 (0:00:00.705) 0:00:11.703 ********** 2026-03-08 00:22:48.089464 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:48.089469 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:48.089475 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:48.089480 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:48.089486 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:48.089492 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:48.089497 | orchestrator | 2026-03-08 00:22:48.089535 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-08 00:22:48.089541 | orchestrator | Sunday 08 March 2026 00:22:47 +0000 (0:00:00.131) 0:00:11.834 ********** 2026-03-08 00:22:48.089547 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:48.089553 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:48.089558 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:48.089566 | orchestrator | skipping: 
[testbed-node-3] 2026-03-08 00:22:48.089594 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:49.422640 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:49.422757 | orchestrator | 2026-03-08 00:22:49.422779 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-08 00:22:49.422795 | orchestrator | Sunday 08 March 2026 00:22:48 +0000 (0:00:00.149) 0:00:11.984 ********** 2026-03-08 00:22:49.422809 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:49.422822 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:22:49.422834 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:49.422847 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:49.422860 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:49.422872 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:49.422885 | orchestrator | 2026-03-08 00:22:49.422898 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-08 00:22:49.422911 | orchestrator | Sunday 08 March 2026 00:22:48 +0000 (0:00:00.144) 0:00:12.128 ********** 2026-03-08 00:22:49.422924 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:22:49.422936 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:22:49.422949 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:22:49.422962 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:22:49.422975 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:22:49.422987 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:22:49.423001 | orchestrator | 2026-03-08 00:22:49.423014 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-08 00:22:49.423028 | orchestrator | Sunday 08 March 2026 00:22:48 +0000 (0:00:00.718) 0:00:12.847 ********** 2026-03-08 00:22:49.423042 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:22:49.423055 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 00:22:49.423069 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:22:49.423083 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:22:49.423097 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:22:49.423111 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:22:49.423124 | orchestrator | 2026-03-08 00:22:49.423138 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:22:49.423155 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423195 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423211 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423227 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423242 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423258 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 00:22:49.423273 | orchestrator | 2026-03-08 00:22:49.423287 | orchestrator | 2026-03-08 00:22:49.423302 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:22:49.423317 | orchestrator | Sunday 08 March 2026 00:22:49 +0000 (0:00:00.232) 0:00:13.079 ********** 2026-03-08 00:22:49.423332 | orchestrator | =============================================================================== 2026-03-08 00:22:49.423347 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s 2026-03-08 00:22:49.423361 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.37s 2026-03-08 00:22:49.423377 | 
orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2026-03-08 00:22:49.423417 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-03-08 00:22:49.423433 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-03-08 00:22:49.423448 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2026-03-08 00:22:49.423462 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s 2026-03-08 00:22:49.423476 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-03-08 00:22:49.423491 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2026-03-08 00:22:49.423532 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-03-08 00:22:49.423544 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-03-08 00:22:49.423557 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-03-08 00:22:49.423570 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2026-03-08 00:22:49.423583 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-03-08 00:22:49.423595 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2026-03-08 00:22:49.423608 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2026-03-08 00:22:49.423620 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2026-03-08 00:22:49.423634 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2026-03-08 
00:22:49.423645 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2026-03-08 00:22:49.878817 | orchestrator | + osism apply --environment custom facts 2026-03-08 00:22:51.797805 | orchestrator | 2026-03-08 00:22:51 | INFO  | Trying to run play facts in environment custom 2026-03-08 00:23:01.897658 | orchestrator | 2026-03-08 00:23:01 | INFO  | Prepare task for execution of facts. 2026-03-08 00:23:01.970302 | orchestrator | 2026-03-08 00:23:01 | INFO  | Task b41db3c7-9a58-49fe-9c8b-59af1ae9b635 (facts) was prepared for execution. 2026-03-08 00:23:01.970397 | orchestrator | 2026-03-08 00:23:01 | INFO  | It takes a moment until task b41db3c7-9a58-49fe-9c8b-59af1ae9b635 (facts) has been started and output is visible here. 2026-03-08 00:23:46.663687 | orchestrator | 2026-03-08 00:23:46.663797 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-08 00:23:46.663813 | orchestrator | 2026-03-08 00:23:46.663826 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-08 00:23:46.663837 | orchestrator | Sunday 08 March 2026 00:23:06 +0000 (0:00:00.066) 0:00:00.066 ********** 2026-03-08 00:23:46.663848 | orchestrator | ok: [testbed-manager] 2026-03-08 00:23:46.663860 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.663872 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:23:46.663883 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.663894 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:23:46.663904 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:23:46.663915 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:23:46.663925 | orchestrator | 2026-03-08 00:23:46.663936 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-08 00:23:46.663947 | orchestrator | Sunday 08 March 2026 00:23:07 +0000 (0:00:01.375) 0:00:01.442 
********** 2026-03-08 00:23:46.663958 | orchestrator | ok: [testbed-manager] 2026-03-08 00:23:46.663968 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.663979 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:23:46.663990 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:23:46.664002 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:23:46.664027 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.664038 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:23:46.664049 | orchestrator | 2026-03-08 00:23:46.664079 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-08 00:23:46.664091 | orchestrator | 2026-03-08 00:23:46.664102 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-08 00:23:46.664112 | orchestrator | Sunday 08 March 2026 00:23:08 +0000 (0:00:01.181) 0:00:02.623 ********** 2026-03-08 00:23:46.664123 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.664134 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.664144 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.664155 | orchestrator | 2026-03-08 00:23:46.664166 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-08 00:23:46.664177 | orchestrator | Sunday 08 March 2026 00:23:08 +0000 (0:00:00.094) 0:00:02.718 ********** 2026-03-08 00:23:46.664188 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.664198 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.664209 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.664220 | orchestrator | 2026-03-08 00:23:46.664233 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-08 00:23:46.664246 | orchestrator | Sunday 08 March 2026 00:23:08 +0000 (0:00:00.194) 0:00:02.912 ********** 2026-03-08 00:23:46.664258 | orchestrator | ok: [testbed-node-3] 2026-03-08 
00:23:46.664270 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.664283 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.664295 | orchestrator | 2026-03-08 00:23:46.664307 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-08 00:23:46.664320 | orchestrator | Sunday 08 March 2026 00:23:09 +0000 (0:00:00.223) 0:00:03.135 ********** 2026-03-08 00:23:46.664333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:23:46.664347 | orchestrator | 2026-03-08 00:23:46.664359 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-08 00:23:46.664372 | orchestrator | Sunday 08 March 2026 00:23:09 +0000 (0:00:00.123) 0:00:03.259 ********** 2026-03-08 00:23:46.664384 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.664396 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.664408 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.664420 | orchestrator | 2026-03-08 00:23:46.664445 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-08 00:23:46.664517 | orchestrator | Sunday 08 March 2026 00:23:09 +0000 (0:00:00.427) 0:00:03.686 ********** 2026-03-08 00:23:46.664530 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:23:46.664543 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:23:46.664556 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:23:46.664569 | orchestrator | 2026-03-08 00:23:46.664582 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-08 00:23:46.664596 | orchestrator | Sunday 08 March 2026 00:23:09 +0000 (0:00:00.111) 0:00:03.798 ********** 2026-03-08 00:23:46.664608 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.664618 | orchestrator | 
changed: [testbed-node-3] 2026-03-08 00:23:46.664629 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.664640 | orchestrator | 2026-03-08 00:23:46.664651 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-08 00:23:46.664662 | orchestrator | Sunday 08 March 2026 00:23:10 +0000 (0:00:01.046) 0:00:04.844 ********** 2026-03-08 00:23:46.664672 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.664683 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.664694 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.664705 | orchestrator | 2026-03-08 00:23:46.664716 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-08 00:23:46.664727 | orchestrator | Sunday 08 March 2026 00:23:11 +0000 (0:00:00.470) 0:00:05.314 ********** 2026-03-08 00:23:46.664738 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.664748 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:23:46.664759 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.664770 | orchestrator | 2026-03-08 00:23:46.664790 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-08 00:23:46.664801 | orchestrator | Sunday 08 March 2026 00:23:12 +0000 (0:00:01.037) 0:00:06.352 ********** 2026-03-08 00:23:46.664812 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.664823 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.664834 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:23:46.664844 | orchestrator | 2026-03-08 00:23:46.664855 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-08 00:23:46.664866 | orchestrator | Sunday 08 March 2026 00:23:29 +0000 (0:00:16.715) 0:00:23.068 ********** 2026-03-08 00:23:46.664877 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:23:46.664887 | orchestrator | skipping: [testbed-node-4] 
2026-03-08 00:23:46.664898 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:23:46.664909 | orchestrator | 2026-03-08 00:23:46.664919 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-08 00:23:46.664949 | orchestrator | Sunday 08 March 2026 00:23:29 +0000 (0:00:00.094) 0:00:23.162 ********** 2026-03-08 00:23:46.664961 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:23:46.664972 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:23:46.664982 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:23:46.664993 | orchestrator | 2026-03-08 00:23:46.665004 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-08 00:23:46.665015 | orchestrator | Sunday 08 March 2026 00:23:37 +0000 (0:00:08.224) 0:00:31.387 ********** 2026-03-08 00:23:46.665026 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.665036 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.665047 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.665058 | orchestrator | 2026-03-08 00:23:46.665069 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-08 00:23:46.665080 | orchestrator | Sunday 08 March 2026 00:23:37 +0000 (0:00:00.444) 0:00:31.832 ********** 2026-03-08 00:23:46.665091 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-08 00:23:46.665102 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-08 00:23:46.665113 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-08 00:23:46.665124 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-08 00:23:46.665135 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-08 00:23:46.665146 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-08 00:23:46.665157 | orchestrator | 
changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-08 00:23:46.665167 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-08 00:23:46.665178 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-08 00:23:46.665189 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-08 00:23:46.665200 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-08 00:23:46.665211 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-08 00:23:46.665221 | orchestrator | 2026-03-08 00:23:46.665232 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-08 00:23:46.665243 | orchestrator | Sunday 08 March 2026 00:23:41 +0000 (0:00:03.715) 0:00:35.548 ********** 2026-03-08 00:23:46.665254 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:23:46.665264 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.665275 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.665286 | orchestrator | 2026-03-08 00:23:46.665296 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:23:46.665307 | orchestrator | 2026-03-08 00:23:46.665318 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:23:46.665329 | orchestrator | Sunday 08 March 2026 00:23:42 +0000 (0:00:01.368) 0:00:36.916 ********** 2026-03-08 00:23:46.665340 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:23:46.665358 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:23:46.665369 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:23:46.665380 | orchestrator | ok: [testbed-manager] 2026-03-08 00:23:46.665390 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:23:46.665440 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:23:46.665477 | orchestrator | ok: [testbed-node-3] 
2026-03-08 00:23:46.665489 | orchestrator | 2026-03-08 00:23:46.665500 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:23:46.665511 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:23:46.665523 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:23:46.665535 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:23:46.665546 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:23:46.665557 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:23:46.665568 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:23:46.665579 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:23:46.665590 | orchestrator | 2026-03-08 00:23:46.665601 | orchestrator | 2026-03-08 00:23:46.665612 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:23:46.665623 | orchestrator | Sunday 08 March 2026 00:23:46 +0000 (0:00:03.770) 0:00:40.687 ********** 2026-03-08 00:23:46.665634 | orchestrator | =============================================================================== 2026-03-08 00:23:46.665644 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.72s 2026-03-08 00:23:46.665655 | orchestrator | Install required packages (Debian) -------------------------------------- 8.23s 2026-03-08 00:23:46.665666 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s 2026-03-08 00:23:46.665677 | orchestrator | Copy fact files 
--------------------------------------------------------- 3.72s 2026-03-08 00:23:46.665687 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s 2026-03-08 00:23:46.665698 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s 2026-03-08 00:23:46.665717 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2026-03-08 00:23:46.896254 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2026-03-08 00:23:46.896358 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s 2026-03-08 00:23:46.896374 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s 2026-03-08 00:23:46.896385 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2026-03-08 00:23:46.896396 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2026-03-08 00:23:46.896407 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-03-08 00:23:46.896418 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2026-03-08 00:23:46.896428 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-03-08 00:23:46.896440 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2026-03-08 00:23:46.896488 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-03-08 00:23:46.896501 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2026-03-08 00:23:47.174794 | orchestrator | + osism apply bootstrap 2026-03-08 00:23:59.218823 | orchestrator | 2026-03-08 00:23:59 | INFO  | Prepare task for execution of bootstrap. 
2026-03-08 00:23:59.283891 | orchestrator | 2026-03-08 00:23:59 | INFO  | Task d1cb3179-9502-4af7-96fe-8c6c28066309 (bootstrap) was prepared for execution. 2026-03-08 00:23:59.283981 | orchestrator | 2026-03-08 00:23:59 | INFO  | It takes a moment until task d1cb3179-9502-4af7-96fe-8c6c28066309 (bootstrap) has been started and output is visible here. 2026-03-08 00:24:15.144498 | orchestrator | 2026-03-08 00:24:15.144611 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-08 00:24:15.144624 | orchestrator | 2026-03-08 00:24:15.144632 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-08 00:24:15.144640 | orchestrator | Sunday 08 March 2026 00:24:03 +0000 (0:00:00.106) 0:00:00.106 ********** 2026-03-08 00:24:15.144647 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:24:15.144656 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:24:15.144663 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:24:15.144670 | orchestrator | ok: [testbed-manager] 2026-03-08 00:24:15.144677 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:24:15.144684 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:24:15.144691 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:24:15.144699 | orchestrator | 2026-03-08 00:24:15.144705 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:24:15.144712 | orchestrator | 2026-03-08 00:24:15.144719 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:24:15.144727 | orchestrator | Sunday 08 March 2026 00:24:03 +0000 (0:00:00.172) 0:00:00.279 ********** 2026-03-08 00:24:15.144734 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:24:15.144741 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:24:15.144747 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:24:15.144754 | orchestrator | ok: [testbed-manager] 2026-03-08 
00:24:15.144761 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:24:15.144768 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:24:15.144776 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:24:15.144784 | orchestrator | 2026-03-08 00:24:15.144791 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2026-03-08 00:24:15.144799 | orchestrator | 2026-03-08 00:24:15.144805 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:24:15.144812 | orchestrator | Sunday 08 March 2026 00:24:07 +0000 (0:00:03.826) 0:00:04.105 ********** 2026-03-08 00:24:15.144821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:24:15.144828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:24:15.144835 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-08 00:24:15.144842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:24:15.144849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-08 00:24:15.144856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-08 00:24:15.144863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-08 00:24:15.144869 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-08 00:24:15.144876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-08 00:24:15.144884 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-08 00:24:15.144890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:24:15.144898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-08 00:24:15.144905 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-08 00:24:15.144912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  
2026-03-08 00:24:15.144919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-08 00:24:15.144925 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-08 00:24:15.144955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-08 00:24:15.144963 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:15.144970 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-08 00:24:15.144977 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-08 00:24:15.144984 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:15.144990 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-08 00:24:15.144997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-08 00:24:15.145004 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-08 00:24:15.145011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-08 00:24:15.145018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-08 00:24:15.145025 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-08 00:24:15.145039 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-08 00:24:15.145046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-08 00:24:15.145053 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-08 00:24:15.145060 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:24:15.145067 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-08 00:24:15.145073 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-08 00:24:15.145080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-08 00:24:15.145087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:24:15.145094 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-08 00:24:15.145101 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-08 00:24:15.145108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:24:15.145114 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:15.145122 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:15.145130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-08 00:24:15.145138 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-08 00:24:15.145145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-08 00:24:15.145152 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-08 00:24:15.145159 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:15.145166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-08 00:24:15.145173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-08 00:24:15.145198 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-08 00:24:15.145205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-08 00:24:15.145213 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-08 00:24:15.145219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-08 00:24:15.145226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-08 00:24:15.145233 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:15.145239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-08 00:24:15.145246 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-08 00:24:15.145252 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:15.145259 | orchestrator |
2026-03-08 00:24:15.145266 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-08 00:24:15.145272 | orchestrator |
2026-03-08 00:24:15.145280 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-08 00:24:15.145287 | orchestrator | Sunday 08 March 2026 00:24:07 +0000 (0:00:00.446) 0:00:04.551 **********
2026-03-08 00:24:15.145293 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:15.145300 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:15.145314 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:15.145321 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:15.145326 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:15.145332 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:15.145338 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:15.145344 | orchestrator |
2026-03-08 00:24:15.145351 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-08 00:24:15.145357 | orchestrator | Sunday 08 March 2026 00:24:09 +0000 (0:00:01.322) 0:00:05.874 **********
2026-03-08 00:24:15.145364 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:15.145370 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:15.145376 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:15.145382 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:15.145388 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:15.145396 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:15.145403 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:15.145411 | orchestrator |
2026-03-08 00:24:15.145418 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-08 00:24:15.145426 | orchestrator | Sunday 08 March 2026 00:24:10 +0000 (0:00:01.306) 0:00:07.180 **********
2026-03-08 00:24:15.145465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:15.145477 | orchestrator |
2026-03-08 00:24:15.145485 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-08 00:24:15.145492 | orchestrator | Sunday 08 March 2026 00:24:10 +0000 (0:00:00.269) 0:00:07.450 **********
2026-03-08 00:24:15.145500 | orchestrator | changed: [testbed-manager]
2026-03-08 00:24:15.145507 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:15.145514 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:15.145521 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:15.145528 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:15.145534 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:15.145542 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:15.145549 | orchestrator |
2026-03-08 00:24:15.145556 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-08 00:24:15.145563 | orchestrator | Sunday 08 March 2026 00:24:12 +0000 (0:00:01.991) 0:00:09.441 **********
2026-03-08 00:24:15.145569 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:15.145577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:15.145586 | orchestrator |
2026-03-08 00:24:15.145593 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-08 00:24:15.145599 | orchestrator | Sunday 08 March 2026 00:24:12 +0000 (0:00:00.246) 0:00:09.687 **********
2026-03-08 00:24:15.145606 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:15.145612 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:15.145619 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:15.145625 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:15.145632 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:15.145655 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:15.145662 | orchestrator |
2026-03-08 00:24:15.145668 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-08 00:24:15.145675 | orchestrator | Sunday 08 March 2026 00:24:13 +0000 (0:00:01.005) 0:00:10.693 **********
2026-03-08 00:24:15.145682 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:15.145688 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:15.145695 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:15.145701 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:15.145707 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:15.145714 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:15.145728 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:15.145735 | orchestrator |
2026-03-08 00:24:15.145741 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-08 00:24:15.145750 | orchestrator | Sunday 08 March 2026 00:24:14 +0000 (0:00:00.555) 0:00:11.249 **********
2026-03-08 00:24:15.145756 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:15.145762 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:15.145768 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:15.145774 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:15.145780 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:15.145787 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:15.145794 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:15.145801 | orchestrator |
2026-03-08 00:24:15.145808 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-08 00:24:15.145816 | orchestrator | Sunday 08 March 2026 00:24:14 +0000 (0:00:00.264) 0:00:11.747 **********
2026-03-08 00:24:15.145822 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:15.145829 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:15.145847 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:26.745181 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:26.745292 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:26.745307 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:26.745319 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:26.745330 | orchestrator |
2026-03-08 00:24:26.745342 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-08 00:24:26.745355 | orchestrator | Sunday 08 March 2026 00:24:15 +0000 (0:00:00.264) 0:00:12.011 **********
2026-03-08 00:24:26.745367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:26.745396 | orchestrator |
2026-03-08 00:24:26.745408 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-08 00:24:26.745420 | orchestrator | Sunday 08 March 2026 00:24:15 +0000 (0:00:00.337) 0:00:12.348 **********
2026-03-08 00:24:26.745505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:26.745528 | orchestrator |
2026-03-08 00:24:26.745547 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-08 00:24:26.745570 | orchestrator | Sunday 08 March 2026 00:24:16 +0000 (0:00:00.428) 0:00:12.777 **********
2026-03-08 00:24:26.745588 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.745608 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.745619 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.745630 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.745641 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.745652 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.745662 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.745673 | orchestrator |
2026-03-08 00:24:26.745684 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-08 00:24:26.745695 | orchestrator | Sunday 08 March 2026 00:24:17 +0000 (0:00:01.328) 0:00:14.105 **********
2026-03-08 00:24:26.745708 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:26.745722 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:26.745735 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:26.745747 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:26.745760 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:26.745772 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:26.745784 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:26.745796 | orchestrator |
2026-03-08 00:24:26.745809 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-08 00:24:26.745849 | orchestrator | Sunday 08 March 2026 00:24:17 +0000 (0:00:00.228) 0:00:14.333 **********
2026-03-08 00:24:26.745862 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.745874 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.745887 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.745899 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.745911 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.745923 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.745936 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.745948 | orchestrator |
2026-03-08 00:24:26.745961 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-08 00:24:26.745973 | orchestrator | Sunday 08 March 2026 00:24:18 +0000 (0:00:00.564) 0:00:14.898 **********
2026-03-08 00:24:26.745986 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:26.745999 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:26.746011 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:26.746079 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:26.746090 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:26.746101 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:26.746111 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:26.746122 | orchestrator |
2026-03-08 00:24:26.746133 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-08 00:24:26.746145 | orchestrator | Sunday 08 March 2026 00:24:18 +0000 (0:00:00.245) 0:00:15.143 **********
2026-03-08 00:24:26.746156 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:26.746167 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:26.746178 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:26.746189 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.746199 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:26.746210 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:26.746221 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:26.746232 | orchestrator |
2026-03-08 00:24:26.746243 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-08 00:24:26.746254 | orchestrator | Sunday 08 March 2026 00:24:18 +0000 (0:00:00.511) 0:00:15.654 **********
2026-03-08 00:24:26.746265 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.746275 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:26.746286 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:26.746297 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:26.746307 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:26.746318 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:26.746328 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:26.746339 | orchestrator |
2026-03-08 00:24:26.746360 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-08 00:24:26.746371 | orchestrator | Sunday 08 March 2026 00:24:19 +0000 (0:00:01.019) 0:00:16.737 **********
2026-03-08 00:24:26.746382 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.746392 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.746403 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.746414 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.746425 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.746466 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.746484 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.746503 | orchestrator |
2026-03-08 00:24:26.746523 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-08 00:24:26.746542 | orchestrator | Sunday 08 March 2026 00:24:21 +0000 (0:00:01.019) 0:00:17.757 **********
2026-03-08 00:24:26.746585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:26.746606 | orchestrator |
2026-03-08 00:24:26.746620 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-08 00:24:26.746631 | orchestrator | Sunday 08 March 2026 00:24:21 +0000 (0:00:00.332) 0:00:18.089 **********
2026-03-08 00:24:26.746653 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:26.746664 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:26.746678 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:24:26.746696 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:26.746714 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:24:26.746734 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:26.746745 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:24:26.746755 | orchestrator |
2026-03-08 00:24:26.746766 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-08 00:24:26.746777 | orchestrator | Sunday 08 March 2026 00:24:22 +0000 (0:00:01.225) 0:00:19.315 **********
2026-03-08 00:24:26.746787 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.746798 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.746808 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.746819 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.746830 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.746840 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.746850 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.746861 | orchestrator |
2026-03-08 00:24:26.746872 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-08 00:24:26.746882 | orchestrator | Sunday 08 March 2026 00:24:22 +0000 (0:00:00.202) 0:00:19.517 **********
2026-03-08 00:24:26.746893 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.746904 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.746914 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.746925 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.746935 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.746946 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.746957 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.746967 | orchestrator |
2026-03-08 00:24:26.746978 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-08 00:24:26.746988 | orchestrator | Sunday 08 March 2026 00:24:22 +0000 (0:00:00.180) 0:00:19.698 **********
2026-03-08 00:24:26.746999 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.747009 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.747020 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.747030 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.747041 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.747051 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.747062 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.747072 | orchestrator |
2026-03-08 00:24:26.747083 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-08 00:24:26.747094 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.180) 0:00:19.878 **********
2026-03-08 00:24:26.747105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:24:26.747118 | orchestrator |
2026-03-08 00:24:26.747129 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-08 00:24:26.747139 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.215) 0:00:20.093 **********
2026-03-08 00:24:26.747150 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.747160 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.747171 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.747181 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.747192 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.747203 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.747213 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.747224 | orchestrator |
2026-03-08 00:24:26.747235 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-08 00:24:26.747245 | orchestrator | Sunday 08 March 2026 00:24:23 +0000 (0:00:00.585) 0:00:20.678 **********
2026-03-08 00:24:26.747256 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:24:26.747267 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:24:26.747283 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:24:26.747293 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:24:26.747304 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:24:26.747315 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:24:26.747325 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:24:26.747336 | orchestrator |
2026-03-08 00:24:26.747347 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-08 00:24:26.747357 | orchestrator | Sunday 08 March 2026 00:24:24 +0000 (0:00:00.183) 0:00:20.862 **********
2026-03-08 00:24:26.747368 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.747378 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.747389 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.747400 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.747410 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:24:26.747420 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:24:26.747478 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:24:26.747499 | orchestrator |
2026-03-08 00:24:26.747519 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-08 00:24:26.747538 | orchestrator | Sunday 08 March 2026 00:24:25 +0000 (0:00:01.019) 0:00:21.882 **********
2026-03-08 00:24:26.747556 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.747568 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.747578 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.747589 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.747600 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:24:26.747610 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:24:26.747621 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:24:26.747632 | orchestrator |
2026-03-08 00:24:26.747643 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-08 00:24:26.747654 | orchestrator | Sunday 08 March 2026 00:24:25 +0000 (0:00:00.575) 0:00:22.457 **********
2026-03-08 00:24:26.747665 | orchestrator | ok: [testbed-manager]
2026-03-08 00:24:26.747675 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:24:26.747686 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:24:26.747696 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:24:26.747715 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.021255 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.021314 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.021322 | orchestrator |
2026-03-08 00:25:07.021329 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-08 00:25:07.021336 | orchestrator | Sunday 08 March 2026 00:24:26 +0000 (0:00:01.060) 0:00:23.517 **********
2026-03-08 00:25:07.021341 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021348 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021354 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021360 | orchestrator | changed: [testbed-manager]
2026-03-08 00:25:07.021365 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.021372 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.021378 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.021383 | orchestrator |
2026-03-08 00:25:07.021390 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-08 00:25:07.021396 | orchestrator | Sunday 08 March 2026 00:24:43 +0000 (0:00:17.191) 0:00:40.709 **********
2026-03-08 00:25:07.021403 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021444 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021451 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021457 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.021463 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.021469 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.021474 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.021480 | orchestrator |
2026-03-08 00:25:07.021486 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-08 00:25:07.021493 | orchestrator | Sunday 08 March 2026 00:24:44 +0000 (0:00:00.214) 0:00:40.924 **********
2026-03-08 00:25:07.021499 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021522 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021528 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021534 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.021540 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.021546 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.021552 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.021558 | orchestrator |
2026-03-08 00:25:07.021564 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-08 00:25:07.021571 | orchestrator | Sunday 08 March 2026 00:24:44 +0000 (0:00:00.230) 0:00:41.154 **********
2026-03-08 00:25:07.021577 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021583 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021589 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021595 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.021601 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.021607 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.021612 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.021618 | orchestrator |
2026-03-08 00:25:07.021625 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-08 00:25:07.021631 | orchestrator | Sunday 08 March 2026 00:24:44 +0000 (0:00:00.222) 0:00:41.376 **********
2026-03-08 00:25:07.021638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:25:07.021646 | orchestrator |
2026-03-08 00:25:07.021653 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-08 00:25:07.021659 | orchestrator | Sunday 08 March 2026 00:24:44 +0000 (0:00:00.282) 0:00:41.659 **********
2026-03-08 00:25:07.021664 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.021670 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021676 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021682 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.021698 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.021704 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021710 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.021715 | orchestrator |
2026-03-08 00:25:07.021721 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-08 00:25:07.021727 | orchestrator | Sunday 08 March 2026 00:24:46 +0000 (0:00:01.901) 0:00:43.560 **********
2026-03-08 00:25:07.021732 | orchestrator | changed: [testbed-manager]
2026-03-08 00:25:07.021738 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:07.021744 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:07.021750 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:07.021756 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.021762 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.021768 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.021773 | orchestrator |
2026-03-08 00:25:07.021780 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-08 00:25:07.021786 | orchestrator | Sunday 08 March 2026 00:24:47 +0000 (0:00:01.130) 0:00:44.691 **********
2026-03-08 00:25:07.021792 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.021798 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.021803 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.021809 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.021815 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.021821 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.021827 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.021834 | orchestrator |
2026-03-08 00:25:07.021839 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-08 00:25:07.021845 | orchestrator | Sunday 08 March 2026 00:24:48 +0000 (0:00:00.890) 0:00:45.581 **********
2026-03-08 00:25:07.021854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:25:07.021867 | orchestrator |
2026-03-08 00:25:07.021874 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-08 00:25:07.021882 | orchestrator | Sunday 08 March 2026 00:24:49 +0000 (0:00:00.239) 0:00:45.821 **********
2026-03-08 00:25:07.021889 | orchestrator | changed: [testbed-manager]
2026-03-08 00:25:07.021895 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:07.021902 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:07.021909 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:07.021916 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.021921 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.021926 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.021930 | orchestrator |
2026-03-08 00:25:07.021946 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-08 00:25:07.021951 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:01.014) 0:00:46.835 **********
2026-03-08 00:25:07.021956 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:25:07.021960 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:25:07.021965 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:25:07.021969 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:25:07.021974 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:25:07.021978 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:25:07.021982 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:25:07.021987 | orchestrator |
2026-03-08 00:25:07.021991 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-08 00:25:07.021996 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.257) 0:00:47.017 **********
2026-03-08 00:25:07.022001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:25:07.022006 | orchestrator |
2026-03-08 00:25:07.022010 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-08 00:25:07.022048 | orchestrator | Sunday 08 March 2026 00:24:50 +0000 (0:00:00.257) 0:00:47.275 **********
2026-03-08 00:25:07.022053 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.022057 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.022061 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.022064 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.022068 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.022072 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.022076 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.022080 | orchestrator |
2026-03-08 00:25:07.022083 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-08 00:25:07.022087 | orchestrator | Sunday 08 March 2026 00:24:52 +0000 (0:00:01.918) 0:00:49.193 **********
2026-03-08 00:25:07.022091 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:07.022095 | orchestrator | changed: [testbed-manager]
2026-03-08 00:25:07.022098 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:07.022102 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:07.022106 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.022110 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.022114 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.022117 | orchestrator |
2026-03-08 00:25:07.022121 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-08 00:25:07.022125 | orchestrator | Sunday 08 March 2026 00:24:53 +0000 (0:00:01.155) 0:00:50.349 **********
2026-03-08 00:25:07.022129 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:25:07.022132 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:25:07.022136 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:25:07.022140 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:25:07.022144 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:25:07.022148 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:25:07.022155 | orchestrator | changed: [testbed-manager]
2026-03-08 00:25:07.022159 | orchestrator |
2026-03-08 00:25:07.022163 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-08 00:25:07.022167 | orchestrator | Sunday 08 March 2026 00:25:04 +0000 (0:00:11.004) 0:01:01.353 **********
2026-03-08 00:25:07.022170 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.022174 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.022178 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.022182 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.022185 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.022189 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.022193 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.022197 | orchestrator |
2026-03-08 00:25:07.022200 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-08 00:25:07.022204 | orchestrator | Sunday 08 March 2026 00:25:05 +0000 (0:00:01.025) 0:01:02.378 **********
2026-03-08 00:25:07.022208 | orchestrator | ok: [testbed-manager]
2026-03-08 00:25:07.022212 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.022215 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.022219 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:25:07.022223 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:25:07.022227 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:25:07.022230 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.022234 | orchestrator |
2026-03-08 00:25:07.022238 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-08 00:25:07.022242 | orchestrator | Sunday 08 March 2026 00:25:06 +0000 (0:00:00.815) 0:01:03.193 **********
2026-03-08 00:25:07.022245 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:25:07.022249 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:25:07.022253 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:25:07.022257 | orchestrator | ok:
[testbed-manager] 2026-03-08 00:25:07.022260 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:07.022264 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:07.022268 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:07.022271 | orchestrator | 2026-03-08 00:25:07.022275 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-08 00:25:07.022279 | orchestrator | Sunday 08 March 2026 00:25:06 +0000 (0:00:00.176) 0:01:03.370 ********** 2026-03-08 00:25:07.022283 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:25:07.022287 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:25:07.022290 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:25:07.022297 | orchestrator | ok: [testbed-manager] 2026-03-08 00:25:07.022301 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:25:07.022304 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:25:07.022308 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:25:07.022312 | orchestrator | 2026-03-08 00:25:07.022315 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-08 00:25:07.022319 | orchestrator | Sunday 08 March 2026 00:25:06 +0000 (0:00:00.168) 0:01:03.538 ********** 2026-03-08 00:25:07.022324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:25:07.022328 | orchestrator | 2026-03-08 00:25:07.022335 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-08 00:27:57.307863 | orchestrator | Sunday 08 March 2026 00:25:07 +0000 (0:00:00.225) 0:01:03.764 ********** 2026-03-08 00:27:57.307980 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.307996 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.308008 | orchestrator | 
ok: [testbed-node-1] 2026-03-08 00:27:57.308018 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.308028 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.308037 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.308047 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.308056 | orchestrator | 2026-03-08 00:27:57.308067 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-08 00:27:57.308100 | orchestrator | Sunday 08 March 2026 00:25:08 +0000 (0:00:01.426) 0:01:05.190 ********** 2026-03-08 00:27:57.308111 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:27:57.308121 | orchestrator | changed: [testbed-manager] 2026-03-08 00:27:57.308131 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:27:57.308140 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:27:57.308150 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:27:57.308159 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:27:57.308168 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:27:57.308178 | orchestrator | 2026-03-08 00:27:57.308188 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-08 00:27:57.308198 | orchestrator | Sunday 08 March 2026 00:25:08 +0000 (0:00:00.526) 0:01:05.716 ********** 2026-03-08 00:27:57.308208 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.308217 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.308227 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.308236 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.308245 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.308255 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.308264 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.308274 | orchestrator | 2026-03-08 00:27:57.308283 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-08 
00:27:57.308293 | orchestrator | Sunday 08 March 2026 00:25:09 +0000 (0:00:00.206) 0:01:05.923 ********** 2026-03-08 00:27:57.308302 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.308312 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.308345 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.308356 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.308368 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.308378 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.308449 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.308461 | orchestrator | 2026-03-08 00:27:57.308472 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-08 00:27:57.308483 | orchestrator | Sunday 08 March 2026 00:25:10 +0000 (0:00:01.072) 0:01:06.995 ********** 2026-03-08 00:27:57.308494 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:27:57.308505 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:27:57.308516 | orchestrator | changed: [testbed-manager] 2026-03-08 00:27:57.308526 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:27:57.308538 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:27:57.308549 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:27:57.308560 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:27:57.308571 | orchestrator | 2026-03-08 00:27:57.308582 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-08 00:27:57.308593 | orchestrator | Sunday 08 March 2026 00:25:11 +0000 (0:00:01.534) 0:01:08.530 ********** 2026-03-08 00:27:57.308604 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.308615 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.308626 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.308637 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.308649 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.308660 | orchestrator | ok: 
[testbed-node-3] 2026-03-08 00:27:57.308671 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.308682 | orchestrator | 2026-03-08 00:27:57.308694 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-08 00:27:57.308704 | orchestrator | Sunday 08 March 2026 00:25:13 +0000 (0:00:01.945) 0:01:10.476 ********** 2026-03-08 00:27:57.308714 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.308723 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.308733 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.308742 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.308751 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.308761 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.308770 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.308779 | orchestrator | 2026-03-08 00:27:57.308789 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-08 00:27:57.308806 | orchestrator | Sunday 08 March 2026 00:26:20 +0000 (0:01:06.324) 0:02:16.800 ********** 2026-03-08 00:27:57.308816 | orchestrator | changed: [testbed-manager] 2026-03-08 00:27:57.308825 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:27:57.308835 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:27:57.308861 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:27:57.308883 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:27:57.308904 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:27:57.308943 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:27:57.308954 | orchestrator | 2026-03-08 00:27:57.308963 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-08 00:27:57.308973 | orchestrator | Sunday 08 March 2026 00:27:39 +0000 (0:01:19.850) 0:03:36.651 ********** 2026-03-08 00:27:57.308982 | orchestrator | ok: [testbed-manager] 2026-03-08 00:27:57.308992 | orchestrator | 
ok: [testbed-node-5] 2026-03-08 00:27:57.309001 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.309011 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.309020 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.309030 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.309040 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.309049 | orchestrator | 2026-03-08 00:27:57.309059 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-08 00:27:57.309069 | orchestrator | Sunday 08 March 2026 00:27:42 +0000 (0:00:02.107) 0:03:38.758 ********** 2026-03-08 00:27:57.309079 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:27:57.309088 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:27:57.309097 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:27:57.309107 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:27:57.309117 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:27:57.309126 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:27:57.309136 | orchestrator | changed: [testbed-manager] 2026-03-08 00:27:57.309145 | orchestrator | 2026-03-08 00:27:57.309154 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-08 00:27:57.309164 | orchestrator | Sunday 08 March 2026 00:27:55 +0000 (0:00:13.119) 0:03:51.878 ********** 2026-03-08 00:27:57.309211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-08 00:27:57.309234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-08 00:27:57.309248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-08 00:27:57.309260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-08 00:27:57.309277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-08 00:27:57.309287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-08 00:27:57.309301 | orchestrator | 2026-03-08 00:27:57.309311 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-08 00:27:57.309339 | orchestrator | Sunday 08 March 2026 00:27:55 +0000 (0:00:00.365) 0:03:52.244 ********** 2026-03-08 00:27:57.309349 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-08 00:27:57.309359 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-08 00:27:57.309368 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:27:57.309378 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-08 00:27:57.309387 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:27:57.309397 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:27:57.309406 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-08 00:27:57.309416 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:27:57.309425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:27:57.309444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:27:57.309454 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:27:57.309463 | orchestrator | 2026-03-08 00:27:57.309473 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-08 00:27:57.309486 | orchestrator | Sunday 08 March 2026 00:27:57 +0000 (0:00:01.726) 0:03:53.971 ********** 2026-03-08 00:27:57.309496 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-08 00:27:57.309507 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-08 00:27:57.309521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-08 00:27:57.309536 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-08 00:27:57.309553 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-08 00:27:57.309577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-08 00:28:07.392814 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-08 00:28:07.392952 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-08 00:28:07.392980 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-08 00:28:07.392995 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-08 00:28:07.393007 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-08 00:28:07.393019 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-08 00:28:07.393030 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-08 00:28:07.393064 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-08 00:28:07.393076 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-08 00:28:07.393088 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-08 00:28:07.393099 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-08 00:28:07.393110 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-08 00:28:07.393121 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-08 00:28:07.393131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-08 00:28:07.393148 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-08 00:28:07.393166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-08 00:28:07.393184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-08 00:28:07.393201 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-08 00:28:07.393219 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-08 00:28:07.393239 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-08 00:28:07.393258 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-08 00:28:07.393278 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:28:07.393291 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-08 00:28:07.393302 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-08 00:28:07.393349 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-08 00:28:07.393365 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:28:07.393378 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-08 00:28:07.393392 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-08 00:28:07.393405 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-08 00:28:07.393419 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:28:07.393432 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-08 00:28:07.393445 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-08 00:28:07.393458 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-08 00:28:07.393471 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-08 00:28:07.393484 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-08 00:28:07.393497 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-08 00:28:07.393526 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-08 00:28:07.393539 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:28:07.393551 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-08 00:28:07.393564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-08 00:28:07.393577 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-08 00:28:07.393605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-08 00:28:07.393618 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-08 00:28:07.393653 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-08 00:28:07.393665 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-08 00:28:07.393676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-08 00:28:07.393686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-08 00:28:07.393697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-08 00:28:07.393708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-08 00:28:07.393718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-08 00:28:07.393729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-08 00:28:07.393739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-08 00:28:07.393750 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-08 00:28:07.393761 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-08 00:28:07.393771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-08 00:28:07.393782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-08 00:28:07.393793 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-08 00:28:07.393803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 
16777216}) 2026-03-08 00:28:07.393814 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-08 00:28:07.393824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-08 00:28:07.393835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-08 00:28:07.393846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-08 00:28:07.393856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-08 00:28:07.393867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-08 00:28:07.393877 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-08 00:28:07.393888 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-08 00:28:07.393898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-08 00:28:07.393909 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-08 00:28:07.393920 | orchestrator | 2026-03-08 00:28:07.393931 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-08 00:28:07.393942 | orchestrator | Sunday 08 March 2026 00:28:04 +0000 (0:00:07.187) 0:04:01.158 ********** 2026-03-08 00:28:07.393953 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.393963 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.393974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.393984 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.394002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.394074 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.394088 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-08 00:28:07.394098 | orchestrator | 2026-03-08 00:28:07.394110 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-08 00:28:07.394120 | orchestrator | Sunday 08 March 2026 00:28:05 +0000 (0:00:01.554) 0:04:02.713 ********** 2026-03-08 00:28:07.394131 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:28:07.394148 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:28:07.394159 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:28:07.394170 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:28:07.394180 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:28:07.394191 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:28:07.394202 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-08 00:28:07.394213 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:28:07.394223 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-08 00:28:07.394234 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-08 00:28:07.394263 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
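For reference, the sysctl values applied by the osism.commons.sysctl tasks above are equivalent to a plain sysctl.d fragment like the following (the filename and grouping comments are illustrative, not taken from the role; the values themselves are the ones the play reported as changed or skipped per host group):

```ini
# /etc/sysctl.d/99-osism-testbed.conf  (hypothetical path)

# elasticsearch group (applied on testbed-node-0/1/2)
vm.max_map_count = 262144

# rabbitmq group (applied on testbed-node-0/1/2)
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192

# generic group (applied on all hosts)
vm.swappiness = 1

# compute group (applied on testbed-node-3/4/5)
net.netfilter.nf_conntrack_max = 1048576
```

Loading such a fragment with `sysctl --system` (or at boot) would yield the same kernel settings the role set item by item above.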
2026-03-08 00:28:22.436398 | orchestrator |
2026-03-08 00:28:22.436480 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-08 00:28:22.436487 | orchestrator | Sunday 08 March 2026 00:28:07 +0000 (0:00:01.449) 0:04:04.163 **********
2026-03-08 00:28:22.436492 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436497 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:28:22.436502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436507 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436511 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:28:22.436515 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:28:22.436519 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436522 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:28:22.436526 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436530 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-08 00:28:22.436538 | orchestrator |
2026-03-08 00:28:22.436542 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-08 00:28:22.436545 | orchestrator | Sunday 08 March 2026 00:28:09 +0000 (0:00:01.593) 0:04:05.756 **********
2026-03-08 00:28:22.436549 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436553 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436558 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:28:22.436561 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436565 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:28:22.436584 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:28:22.436588 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436592 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:28:22.436596 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436600 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436604 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-08 00:28:22.436608 | orchestrator |
2026-03-08 00:28:22.436611 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-08 00:28:22.436615 | orchestrator | Sunday 08 March 2026 00:28:10 +0000 (0:00:01.545) 0:04:07.301 **********
2026-03-08 00:28:22.436619 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:28:22.436623 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:28:22.436627 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:28:22.436631 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:28:22.436634 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:28:22.436638 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:28:22.436642 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:28:22.436645 | orchestrator |
2026-03-08 00:28:22.436649 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-08 00:28:22.436653 | orchestrator | Sunday 08 March 2026 00:28:10 +0000 (0:00:00.283) 0:04:07.585 **********
2026-03-08 00:28:22.436657 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:28:22.436661 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:28:22.436664 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:28:22.436668 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:28:22.436672 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:28:22.436675 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:28:22.436679 | orchestrator | ok: [testbed-manager]
2026-03-08 00:28:22.436683 | orchestrator |
2026-03-08 00:28:22.436686 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-08 00:28:22.436690 | orchestrator | Sunday 08 March 2026 00:28:16 +0000 (0:00:05.562) 0:04:13.148 **********
2026-03-08 00:28:22.436694 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-08 00:28:22.436698 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-08 00:28:22.436702 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:28:22.436706 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:28:22.436710 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-08 00:28:22.436713 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-08 00:28:22.436717 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:28:22.436721 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:28:22.436725 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-08 00:28:22.436728 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-08 00:28:22.436732 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:28:22.436736 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:28:22.436739 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-08 00:28:22.436743 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:28:22.436747 | orchestrator |
2026-03-08 00:28:22.436750 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-08 00:28:22.436754 | orchestrator | Sunday 08 March 2026 00:28:16 +0000 (0:00:00.316) 0:04:13.464 **********
2026-03-08 00:28:22.436758 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-08 00:28:22.436762 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-08 00:28:22.436766 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-08 00:28:22.436779 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-08 00:28:22.436783 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-08 00:28:22.436786 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-08 00:28:22.436793 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-08 00:28:22.436797 | orchestrator |
2026-03-08 00:28:22.436801 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-08 00:28:22.436805 | orchestrator | Sunday 08 March 2026 00:28:17 +0000 (0:00:01.088) 0:04:14.552 **********
2026-03-08 00:28:22.436809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:28:22.436814 | orchestrator |
2026-03-08 00:28:22.436818 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-08 00:28:22.436821 | orchestrator | Sunday 08 March 2026 00:28:18 +0000 (0:00:00.419) 0:04:14.972 **********
2026-03-08 00:28:22.436825 | orchestrator | ok: [testbed-manager]
2026-03-08 00:28:22.436829 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:28:22.436833 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:28:22.436836 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:28:22.436840 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:28:22.436844 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:28:22.436847 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:28:22.436851 | orchestrator |
2026-03-08 00:28:22.436855 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-08 00:28:22.436859 | orchestrator | Sunday 08 March 2026 00:28:19 +0000 (0:00:01.625) 0:04:16.597 **********
2026-03-08 00:28:22.436862 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:28:22.436866 | orchestrator | ok: [testbed-manager]
2026-03-08 00:28:22.436870 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:28:22.436874 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:28:22.436877 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:28:22.436881 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:28:22.436885 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:28:22.436888 | orchestrator |
2026-03-08 00:28:22.436892 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-08 00:28:22.436896 | orchestrator | Sunday 08 March 2026 00:28:20 +0000 (0:00:00.679) 0:04:17.277 **********
2026-03-08 00:28:22.436899 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:28:22.436916 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:28:22.436920 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:28:22.436925 | orchestrator | changed: [testbed-manager]
2026-03-08 00:28:22.436929 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:28:22.436933 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:28:22.436937 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:28:22.436942 | orchestrator |
2026-03-08 00:28:22.436946 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-08 00:28:22.436950 | orchestrator | Sunday 08 March 2026 00:28:21 +0000 (0:00:00.659) 0:04:17.937 **********
2026-03-08 00:28:22.436954 | orchestrator | ok: [testbed-manager]
2026-03-08 00:28:22.436959 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:28:22.436963 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:28:22.436968 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:28:22.436972 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:28:22.436977 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:28:22.436981 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:28:22.436985 | orchestrator |
2026-03-08 00:28:22.436990 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-08 00:28:22.436994 | orchestrator | Sunday 08 March 2026 00:28:21 +0000 (0:00:00.648) 0:04:18.586 **********
2026-03-08 00:28:22.437001 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928275.1390116, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:22.437013 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928300.1039898, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:22.437018 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928307.2261064, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:22.437031 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928296.011722, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947542 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928299.1537402, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947667 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928304.0941312, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947685 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772928300.828896, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947698 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947733 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947760 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947773 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947806 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947819 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947830 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 00:28:27.947842 | orchestrator |
2026-03-08 00:28:27.947856 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-08 00:28:27.947869 | orchestrator | Sunday 08 March 2026 00:28:22 +0000 (0:00:01.038) 0:04:19.624 **********
2026-03-08 00:28:27.947880 | orchestrator | changed: [testbed-manager]
2026-03-08 00:28:27.947893 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:28:27.947904 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:28:27.947922 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:28:27.947933 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:28:27.947944 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:28:27.947955 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:28:27.947966 | orchestrator |
2026-03-08 00:28:27.947977 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-08 00:28:27.947988 | orchestrator | Sunday 08 March 2026 00:28:24 +0000 (0:00:01.146) 0:04:20.771 **********
2026-03-08 00:28:27.947999 | orchestrator | changed: [testbed-manager]
2026-03-08 00:28:27.948010 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:28:27.948020 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:28:27.948031 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:28:27.948042 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:28:27.948053 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:28:27.948063 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:28:27.948074 | orchestrator |
2026-03-08 00:28:27.948085 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-08 00:28:27.948096 | orchestrator | Sunday 08 March 2026 00:28:25 +0000 (0:00:01.236) 0:04:22.020 **********
2026-03-08 00:28:27.948107 | orchestrator | changed: [testbed-manager]
2026-03-08 00:28:27.948118 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:28:27.948128 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:28:27.948139 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:28:27.948150 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:28:27.948161 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:28:27.948171 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:28:27.948182 | orchestrator |
2026-03-08 00:28:27.948193 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-08 00:28:27.948209 | orchestrator | Sunday 08 March 2026 00:28:26 +0000 (0:00:01.236) 0:04:23.257 **********
2026-03-08 00:28:27.948220 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:28:27.948232 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:28:27.948242 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:28:27.948253 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:28:27.948264 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:28:27.948274 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:28:27.948370 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:28:27.948388 | orchestrator |
2026-03-08 00:28:27.948399 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-08 00:28:27.948410 | orchestrator | Sunday 08 March 2026 00:28:26 +0000 (0:00:00.265) 0:04:23.523 **********
2026-03-08 00:28:27.948421 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:28:27.948433 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:28:27.948444 | orchestrator | ok: [testbed-manager]
2026-03-08 00:28:27.948455 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:28:27.948466 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:28:27.948477 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:28:27.948488 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:28:27.948499 | orchestrator |
2026-03-08 00:28:27.948510 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-08 00:28:27.948521 | orchestrator | Sunday 08 March 2026 00:28:27 +0000 (0:00:00.764) 0:04:24.288 **********
2026-03-08 00:28:27.948534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:28:27.948547 | orchestrator |
2026-03-08 00:28:27.948558 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-08 00:28:27.948578 | orchestrator | Sunday 08 March 2026 00:28:27 +0000 (0:00:00.401) 0:04:24.689 **********
2026-03-08 00:29:53.458690 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.458807 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:53.458819 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:53.458827 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:53.458855 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:53.458862 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:53.458869 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:53.458877 | orchestrator |
2026-03-08 00:29:53.458887 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-08 00:29:53.458896 | orchestrator | Sunday 08 March 2026 00:28:37 +0000 (0:00:09.792) 0:04:34.482 **********
2026-03-08 00:29:53.458903 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.458911 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.458918 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.458925 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.458932 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.458939 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.458947 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.458970 | orchestrator |
2026-03-08 00:29:53.458985 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-08 00:29:53.458993 | orchestrator | Sunday 08 March 2026 00:28:39 +0000 (0:00:01.505) 0:04:35.987 **********
2026-03-08 00:29:53.459000 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.459007 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.459014 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.459022 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.459029 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.459035 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.459043 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.459050 | orchestrator |
2026-03-08 00:29:53.459057 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-08 00:29:53.459065 | orchestrator | Sunday 08 March 2026 00:28:40 +0000 (0:00:01.015) 0:04:37.003 **********
2026-03-08 00:29:53.459072 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.459079 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.459086 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.459093 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.459100 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.459107 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.459114 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.459121 | orchestrator |
2026-03-08 00:29:53.459128 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-08 00:29:53.459136 | orchestrator | Sunday 08 March 2026 00:28:40 +0000 (0:00:00.321) 0:04:37.325 **********
2026-03-08 00:29:53.459143 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.459150 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.459157 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.459190 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.459197 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.459205 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.459212 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.459220 | orchestrator |
2026-03-08 00:29:53.459228 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-08 00:29:53.459237 | orchestrator | Sunday 08 March 2026 00:28:40 +0000 (0:00:00.283) 0:04:37.608 **********
2026-03-08 00:29:53.459246 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.459253 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.459261 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.459269 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.459278 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.459286 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.459293 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.459302 | orchestrator |
2026-03-08 00:29:53.459309 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-08 00:29:53.459318 | orchestrator | Sunday 08 March 2026 00:28:41 +0000 (0:00:00.326) 0:04:37.934 **********
2026-03-08 00:29:53.459326 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.459334 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.459343 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.459358 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.459366 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.459374 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.459386 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.459399 | orchestrator |
2026-03-08 00:29:53.459411 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-08 00:29:53.459424 | orchestrator | Sunday 08 March 2026 00:28:46 +0000 (0:00:05.612) 0:04:43.546 **********
2026-03-08 00:29:53.459440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:29:53.459455 | orchestrator |
2026-03-08 00:29:53.459469 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-08 00:29:53.459483 | orchestrator | Sunday 08 March 2026 00:28:47 +0000 (0:00:00.398) 0:04:43.944 **********
2026-03-08 00:29:53.459496 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459510 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-08 00:29:53.459523 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459535 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-08 00:29:53.459549 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:53.459563 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459575 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:53.459588 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-08 00:29:53.459601 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459613 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-08 00:29:53.459625 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:53.459638 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459650 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-08 00:29:53.459663 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:53.459676 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459688 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:29:53.459723 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-08 00:29:53.459736 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:29:53.459750 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-08 00:29:53.459762 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-08 00:29:53.459775 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:29:53.459787 | orchestrator |
2026-03-08 00:29:53.459801 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-08 00:29:53.459814 | orchestrator | Sunday 08 March 2026 00:28:47 +0000 (0:00:00.363) 0:04:44.308 **********
2026-03-08 00:29:53.459827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:29:53.459840 | orchestrator |
2026-03-08 00:29:53.459852 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-08 00:29:53.459866 | orchestrator | Sunday 08 March 2026 00:28:47 +0000 (0:00:00.382) 0:04:44.690 **********
2026-03-08 00:29:53.459878 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-08 00:29:53.459892 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-08 00:29:53.459905 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:29:53.459917 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:29:53.459930 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-08 00:29:53.459942 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-08 00:29:53.459966 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:29:53.459980 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-08 00:29:53.459993 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:29:53.460028 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-08 00:29:53.460042 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:29:53.460054 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:29:53.460067 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-08 00:29:53.460080 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:29:53.460093 | orchestrator |
2026-03-08 00:29:53.460106 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-08 00:29:53.460118 | orchestrator | Sunday 08 March 2026 00:28:48 +0000 (0:00:00.332) 0:04:45.022 **********
2026-03-08 00:29:53.460131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:29:53.460143 | orchestrator |
2026-03-08 00:29:53.460157 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-08 00:29:53.460198 | orchestrator | Sunday 08 March 2026 00:28:48 +0000 (0:00:00.441) 0:04:45.464 **********
2026-03-08 00:29:53.460211 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:53.460223 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:53.460235 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:53.460246 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:53.460257 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:53.460269 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:53.460280 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:53.460292 | orchestrator |
2026-03-08 00:29:53.460304 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-08 00:29:53.460316 | orchestrator | Sunday 08 March 2026 00:29:24 +0000 (0:00:36.034) 0:05:21.499 **********
2026-03-08 00:29:53.460328 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:53.460340 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:53.460351 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:53.460363 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:53.460375 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:53.460386 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:53.460404 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:53.460416 | orchestrator |
2026-03-08 00:29:53.460430 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-08 00:29:53.460443 | orchestrator | Sunday 08 March 2026 00:29:34 +0000 (0:00:10.038) 0:05:31.537 **********
2026-03-08 00:29:53.460456 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:53.460470 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:53.460483 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:53.460496 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:53.460507 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:53.460519 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:53.460530 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:53.460541 | orchestrator |
2026-03-08 00:29:53.460552 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-08 00:29:53.460565 | orchestrator | Sunday 08 March 2026 00:29:43 +0000 (0:00:08.952) 0:05:40.490 **********
2026-03-08 00:29:53.460576 | orchestrator | ok: [testbed-manager]
2026-03-08 00:29:53.460588 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:29:53.460600 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:29:53.460612 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:29:53.460624 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:29:53.460636 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:29:53.460647 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:29:53.460658 | orchestrator |
2026-03-08 00:29:53.460670 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-08 00:29:53.460694 | orchestrator | Sunday 08 March 2026 00:29:45 +0000 (0:00:02.062) 0:05:42.552 **********
2026-03-08 00:29:53.460705 | orchestrator | changed: [testbed-manager]
2026-03-08 00:29:53.460715 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:29:53.460725 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:29:53.460735 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:29:53.460745 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:29:53.460755 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:29:53.460766 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:29:53.460776 | orchestrator |
2026-03-08 00:29:53.460802 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-08 00:30:05.257672 | orchestrator | Sunday 08 March 2026 00:29:53 +0000 (0:00:07.646) 0:05:50.198 **********
2026-03-08 00:30:05.257792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:30:05.257809 | orchestrator |
2026-03-08 00:30:05.257822 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-08 00:30:05.257834 | orchestrator | Sunday 08 March 2026 00:29:53 +0000 (0:00:00.426) 0:05:50.624 **********
2026-03-08 00:30:05.257845 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:30:05.257857 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:30:05.257868 | orchestrator | changed: [testbed-manager]
2026-03-08 00:30:05.257878 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:30:05.257889 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:30:05.257900 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:30:05.257911 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:30:05.257921 | orchestrator |
2026-03-08 00:30:05.257932 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-08 00:30:05.257943 | orchestrator | Sunday 08 March 2026 00:29:54 +0000 (0:00:00.732) 0:05:51.356 **********
2026-03-08 00:30:05.257954 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:05.257966 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:05.257977 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:05.257987 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:05.257998 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:05.258009 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:05.258113 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:05.258125 | orchestrator |
2026-03-08 00:30:05.258136 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-08 00:30:05.258168 | orchestrator | Sunday 08 March 2026 00:29:56 +0000 (0:00:02.221) 0:05:53.578 **********
2026-03-08 00:30:05.258180 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:30:05.258191 | orchestrator | changed: [testbed-manager]
2026-03-08 00:30:05.258202 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:30:05.258215 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:30:05.258228 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:30:05.258241 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:30:05.258254 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:30:05.258266 | orchestrator |
2026-03-08 00:30:05.258278 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-08 00:30:05.258291 | orchestrator | Sunday 08 March 2026 00:29:57 +0000 (0:00:00.837) 0:05:54.416 **********
2026-03-08 00:30:05.258304 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:30:05.258317 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:30:05.258329 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:30:05.258341 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:30:05.258353 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:30:05.258366 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:30:05.258378 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:30:05.258391 | orchestrator |
2026-03-08 00:30:05.258404 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-08 00:30:05.258416 | orchestrator | Sunday 08 March 2026 00:29:57 +0000 (0:00:00.279)
0:05:54.695 ********** 2026-03-08 00:30:05.258451 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:05.258465 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:05.258477 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:05.258489 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:05.258502 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:05.258515 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:30:05.258527 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:05.258539 | orchestrator | 2026-03-08 00:30:05.258553 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-08 00:30:05.258567 | orchestrator | Sunday 08 March 2026 00:29:58 +0000 (0:00:00.405) 0:05:55.100 ********** 2026-03-08 00:30:05.258579 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:05.258592 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:05.258603 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:05.258614 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:05.258624 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:05.258648 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:05.258659 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:05.258670 | orchestrator | 2026-03-08 00:30:05.258680 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-08 00:30:05.258691 | orchestrator | Sunday 08 March 2026 00:29:58 +0000 (0:00:00.325) 0:05:55.426 ********** 2026-03-08 00:30:05.258702 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:05.258713 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:05.258724 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:05.258735 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:05.258745 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:30:05.258756 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
00:30:05.258766 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:30:05.258777 | orchestrator | 2026-03-08 00:30:05.258788 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-08 00:30:05.258799 | orchestrator | Sunday 08 March 2026 00:29:58 +0000 (0:00:00.275) 0:05:55.701 ********** 2026-03-08 00:30:05.258810 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:30:05.258821 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:30:05.258831 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:30:05.258842 | orchestrator | ok: [testbed-manager] 2026-03-08 00:30:05.258853 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:30:05.258863 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:30:05.258874 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:30:05.258884 | orchestrator | 2026-03-08 00:30:05.258895 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-08 00:30:05.258906 | orchestrator | Sunday 08 March 2026 00:29:59 +0000 (0:00:00.305) 0:05:56.007 ********** 2026-03-08 00:30:05.258917 | orchestrator | ok: [testbed-node-3] =>  2026-03-08 00:30:05.258928 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.258938 | orchestrator | ok: [testbed-node-4] =>  2026-03-08 00:30:05.258949 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.258960 | orchestrator | ok: [testbed-node-5] =>  2026-03-08 00:30:05.258971 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.258981 | orchestrator | ok: [testbed-manager] =>  2026-03-08 00:30:05.258992 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.259022 | orchestrator | ok: [testbed-node-0] =>  2026-03-08 00:30:05.259034 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.259045 | orchestrator | ok: [testbed-node-1] =>  2026-03-08 00:30:05.259056 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.259066 | orchestrator | ok: [testbed-node-2] =>  
2026-03-08 00:30:05.259077 | orchestrator |  docker_version: 5:27.5.1 2026-03-08 00:30:05.259088 | orchestrator | 2026-03-08 00:30:05.259099 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-08 00:30:05.259110 | orchestrator | Sunday 08 March 2026 00:29:59 +0000 (0:00:00.298) 0:05:56.305 ********** 2026-03-08 00:30:05.259121 | orchestrator | ok: [testbed-node-3] =>  2026-03-08 00:30:05.259139 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259167 | orchestrator | ok: [testbed-node-4] =>  2026-03-08 00:30:05.259179 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259190 | orchestrator | ok: [testbed-node-5] =>  2026-03-08 00:30:05.259200 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259211 | orchestrator | ok: [testbed-manager] =>  2026-03-08 00:30:05.259222 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259232 | orchestrator | ok: [testbed-node-0] =>  2026-03-08 00:30:05.259243 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259254 | orchestrator | ok: [testbed-node-1] =>  2026-03-08 00:30:05.259265 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259275 | orchestrator | ok: [testbed-node-2] =>  2026-03-08 00:30:05.259286 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-08 00:30:05.259297 | orchestrator | 2026-03-08 00:30:05.259308 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-08 00:30:05.259319 | orchestrator | Sunday 08 March 2026 00:29:59 +0000 (0:00:00.281) 0:05:56.587 ********** 2026-03-08 00:30:05.259330 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:30:05.259341 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:30:05.259351 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:30:05.259362 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:30:05.259373 | orchestrator | skipping: [testbed-node-0] 
2026-03-08 00:30:05.259384 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:30:05.259395 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:30:05.259405 | orchestrator |
2026-03-08 00:30:05.259416 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-08 00:30:05.259427 | orchestrator | Sunday 08 March 2026 00:30:00 +0000 (0:00:00.286) 0:05:56.873 **********
2026-03-08 00:30:05.259438 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:30:05.259449 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:30:05.259459 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:30:05.259470 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:30:05.259481 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:30:05.259492 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:30:05.259502 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:30:05.259513 | orchestrator |
2026-03-08 00:30:05.259524 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-08 00:30:05.259535 | orchestrator | Sunday 08 March 2026 00:30:00 +0000 (0:00:00.391) 0:05:57.264 **********
2026-03-08 00:30:05.259548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:30:05.259560 | orchestrator |
2026-03-08 00:30:05.259571 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-08 00:30:05.259582 | orchestrator | Sunday 08 March 2026 00:30:00 +0000 (0:00:00.428) 0:05:57.693 **********
2026-03-08 00:30:05.259593 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:05.259604 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:05.259615 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:05.259626 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:05.259636 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:05.259647 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:05.259658 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:05.259669 | orchestrator |
2026-03-08 00:30:05.259680 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-08 00:30:05.259691 | orchestrator | Sunday 08 March 2026 00:30:01 +0000 (0:00:00.966) 0:05:58.659 **********
2026-03-08 00:30:05.259707 | orchestrator | ok: [testbed-manager]
2026-03-08 00:30:05.259718 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:30:05.259728 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:30:05.259739 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:30:05.259750 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:30:05.259767 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:30:05.259777 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:30:05.259788 | orchestrator |
2026-03-08 00:30:05.259799 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-08 00:30:05.259812 | orchestrator | Sunday 08 March 2026 00:30:04 +0000 (0:00:02.940) 0:06:01.600 **********
2026-03-08 00:30:05.259822 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-08 00:30:05.259834 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-08 00:30:05.259844 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-08 00:30:05.259855 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:30:05.259866 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-08 00:30:05.259877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-08 00:30:05.259888 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-08 00:30:05.259898 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:30:05.259909 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-08 00:30:05.259920 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-08 00:30:05.259931 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-08 00:30:05.259942 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:30:05.259952 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-08 00:30:05.259964 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-08 00:30:05.259974 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-08 00:30:05.259985 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-08 00:30:05.260003 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-08 00:31:11.126409 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-08 00:31:11.126519 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:11.126537 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-08 00:31:11.126549 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-08 00:31:11.126561 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-08 00:31:11.126571 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:11.126582 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:11.126593 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-08 00:31:11.126604 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-08 00:31:11.126614 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-08 00:31:11.126625 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:11.126636 | orchestrator |
2026-03-08 00:31:11.126648 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-08 00:31:11.126661 | orchestrator | Sunday 08 March 2026 00:30:05 +0000 (0:00:00.748) 0:06:02.348 **********
2026-03-08 00:31:11.126671 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.126682 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.126693 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.126703 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.126714 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.126725 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.126735 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.126745 | orchestrator |
2026-03-08 00:31:11.126756 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-08 00:31:11.126767 | orchestrator | Sunday 08 March 2026 00:30:12 +0000 (0:00:07.038) 0:06:09.387 **********
2026-03-08 00:31:11.126778 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.126788 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.126799 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.126809 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.126820 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.126830 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.126865 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.126876 | orchestrator |
2026-03-08 00:31:11.126887 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-08 00:31:11.126897 | orchestrator | Sunday 08 March 2026 00:30:13 +0000 (0:00:01.113) 0:06:10.500 **********
2026-03-08 00:31:11.126908 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.126918 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.126929 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.126939 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.126949 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.126960 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.126970 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.126981 | orchestrator |
2026-03-08 00:31:11.126991 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-08 00:31:11.127002 | orchestrator | Sunday 08 March 2026 00:30:23 +0000 (0:00:09.389) 0:06:19.890 **********
2026-03-08 00:31:11.127013 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127024 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127034 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:11.127045 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127085 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127106 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127125 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127137 | orchestrator |
2026-03-08 00:31:11.127148 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-08 00:31:11.127158 | orchestrator | Sunday 08 March 2026 00:30:26 +0000 (0:00:03.223) 0:06:23.113 **********
2026-03-08 00:31:11.127169 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127179 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.127190 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127200 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127211 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127221 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127232 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127242 | orchestrator |
2026-03-08 00:31:11.127267 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-08 00:31:11.127279 | orchestrator | Sunday 08 March 2026 00:30:27 +0000 (0:00:01.538) 0:06:24.652 **********
2026-03-08 00:31:11.127289 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127300 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127310 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.127321 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127331 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127341 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127352 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127362 | orchestrator |
2026-03-08 00:31:11.127373 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-08 00:31:11.127383 | orchestrator | Sunday 08 March 2026 00:30:29 +0000 (0:00:01.338) 0:06:25.990 **********
2026-03-08 00:31:11.127394 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:11.127404 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:11.127416 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:11.127426 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:11.127441 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:11.127458 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:11.127476 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:11.127494 | orchestrator |
2026-03-08 00:31:11.127512 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-08 00:31:11.127530 | orchestrator | Sunday 08 March 2026 00:30:30 +0000 (0:00:00.793) 0:06:26.784 **********
2026-03-08 00:31:11.127549 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.127568 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127585 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127615 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127633 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127651 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127668 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127686 | orchestrator |
2026-03-08 00:31:11.127705 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-08 00:31:11.127749 | orchestrator | Sunday 08 March 2026 00:30:40 +0000 (0:00:10.512) 0:06:37.296 **********
2026-03-08 00:31:11.127767 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127777 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127788 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:11.127798 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127809 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127820 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127830 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127841 | orchestrator |
2026-03-08 00:31:11.127851 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-08 00:31:11.127862 | orchestrator | Sunday 08 March 2026 00:30:41 +0000 (0:00:01.016) 0:06:38.313 **********
2026-03-08 00:31:11.127873 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.127884 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.127898 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.127916 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.127935 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.127954 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.127971 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.127989 | orchestrator |
2026-03-08 00:31:11.128001 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-08 00:31:11.128012 | orchestrator | Sunday 08 March 2026 00:30:51 +0000 (0:00:10.147) 0:06:48.460 **********
2026-03-08 00:31:11.128022 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.128034 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.128051 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.128095 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.128112 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.128129 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.128147 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.128166 | orchestrator |
2026-03-08 00:31:11.128185 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-08 00:31:11.128205 | orchestrator | Sunday 08 March 2026 00:31:03 +0000 (0:00:12.111) 0:07:00.572 **********
2026-03-08 00:31:11.128223 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-08 00:31:11.128239 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-08 00:31:11.128250 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-08 00:31:11.128261 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-08 00:31:11.128272 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-08 00:31:11.128282 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-08 00:31:11.128293 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-08 00:31:11.128303 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-08 00:31:11.128314 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-08 00:31:11.128324 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-08 00:31:11.128335 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-08 00:31:11.128346 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-08 00:31:11.128356 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-08 00:31:11.128367 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-08 00:31:11.128377 | orchestrator |
2026-03-08 00:31:11.128388 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-08 00:31:11.128399 | orchestrator | Sunday 08 March 2026 00:31:05 +0000 (0:00:01.267) 0:07:01.839 **********
2026-03-08 00:31:11.128420 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:11.128431 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:11.128441 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:11.128452 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:11.128462 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:11.128473 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:11.128483 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:11.128494 | orchestrator |
2026-03-08 00:31:11.128505 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-08 00:31:11.128515 | orchestrator | Sunday 08 March 2026 00:31:05 +0000 (0:00:00.534) 0:07:02.374 **********
2026-03-08 00:31:11.128526 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:11.128537 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:11.128548 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:11.128558 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:11.128569 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:11.128580 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:11.128590 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:11.128601 | orchestrator |
2026-03-08 00:31:11.128611 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-08 00:31:11.128624 | orchestrator | Sunday 08 March 2026 00:31:10 +0000 (0:00:04.550) 0:07:06.924 **********
2026-03-08 00:31:11.128635 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:11.128646 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:11.128656 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:11.128667 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:11.128677 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:11.128688 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:11.128699 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:11.128709 | orchestrator |
2026-03-08 00:31:11.128721 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-08 00:31:11.128732 | orchestrator | Sunday 08 March 2026 00:31:10 +0000 (0:00:00.656) 0:07:07.581 **********
2026-03-08 00:31:11.128743 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-08 00:31:11.128753 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-08 00:31:11.128806 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-08 00:31:11.128818 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-08 00:31:11.128835 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:11.128853 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-08 00:31:11.128870 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-08 00:31:11.128887 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:11.128905 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-08 00:31:11.128935 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-08 00:31:30.468137 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:30.468264 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-08 00:31:30.468277 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-08 00:31:30.468284 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:30.468290 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-08 00:31:30.468296 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-08 00:31:30.468302 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:31:30.468308 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:31:30.468347 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-08 00:31:30.468354 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-08 00:31:30.468360 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:31:30.468367 | orchestrator | 2026-03-08 00:31:30.468374 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-08 00:31:30.468405 | orchestrator | Sunday 08 March 2026 00:31:11 +0000 (0:00:00.539) 0:07:08.120 ********** 2026-03-08 00:31:30.468411 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:31:30.468417 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:31:30.468423 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:31:30.468429 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:31:30.468435 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:31:30.468441 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:31:30.468446 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:31:30.468452 | orchestrator | 2026-03-08 00:31:30.468458 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-08 00:31:30.468464 | orchestrator | Sunday 08 March 2026 00:31:11 +0000 (0:00:00.473) 0:07:08.594 ********** 2026-03-08 00:31:30.468470 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:31:30.468476 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:31:30.468482 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:31:30.468488 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:31:30.468494 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:31:30.468499 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:31:30.468505 | orchestrator | skipping: 
[testbed-node-2]
2026-03-08 00:31:30.468511 | orchestrator |
2026-03-08 00:31:30.468517 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-08 00:31:30.468523 | orchestrator | Sunday 08 March 2026 00:31:12 +0000 (0:00:00.476) 0:07:09.070 **********
2026-03-08 00:31:30.468529 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:30.468535 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:30.468541 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:30.468547 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:30.468552 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:30.468558 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:30.468564 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:30.468570 | orchestrator |
2026-03-08 00:31:30.468576 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-08 00:31:30.468582 | orchestrator | Sunday 08 March 2026 00:31:13 +0000 (0:00:00.679) 0:07:09.750 **********
2026-03-08 00:31:30.468588 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.468594 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.468600 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.468606 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.468612 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.468617 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.468623 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.468628 | orchestrator |
2026-03-08 00:31:30.468634 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-08 00:31:30.468640 | orchestrator | Sunday 08 March 2026 00:31:14 +0000 (0:00:01.868) 0:07:11.618 **********
2026-03-08 00:31:30.468648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:31:30.468657 | orchestrator |
2026-03-08 00:31:30.468677 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-08 00:31:30.468683 | orchestrator | Sunday 08 March 2026 00:31:15 +0000 (0:00:00.847) 0:07:12.466 **********
2026-03-08 00:31:30.468689 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:30.468695 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:30.468701 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:30.468707 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.468713 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:30.468720 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:30.468726 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:30.468732 | orchestrator |
2026-03-08 00:31:30.468738 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-08 00:31:30.468751 | orchestrator | Sunday 08 March 2026 00:31:16 +0000 (0:00:00.845) 0:07:13.312 **********
2026-03-08 00:31:30.468758 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:30.468764 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:30.468770 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:30.468776 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.468782 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:30.468788 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:30.468794 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:30.468800 | orchestrator |
2026-03-08 00:31:30.468806 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-08 00:31:30.468813 | orchestrator | Sunday 08 March 2026 00:31:17 +0000 (0:00:01.087) 0:07:14.399 **********
2026-03-08 00:31:30.468819 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:30.468826 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.468832 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:30.468838 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:30.468844 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:30.468850 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:30.468856 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:30.468862 | orchestrator |
2026-03-08 00:31:30.468869 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-08 00:31:30.468896 | orchestrator | Sunday 08 March 2026 00:31:19 +0000 (0:00:01.433) 0:07:15.832 **********
2026-03-08 00:31:30.468904 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:30.468911 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.468917 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.468923 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.468929 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.468935 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.468940 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.468946 | orchestrator |
2026-03-08 00:31:30.468952 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-08 00:31:30.468958 | orchestrator | Sunday 08 March 2026 00:31:20 +0000 (0:00:01.463) 0:07:17.296 **********
2026-03-08 00:31:30.468963 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:30.468970 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.468976 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:30.468982 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:30.468988 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:30.468994 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:30.469000 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:30.469006 | orchestrator |
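(Editor's note: the "Copy daemon.json configuration file" task in this run renders /etc/docker/daemon.json on every host, but the log does not capture the rendered content. A minimal sketch of what such a file typically carries — illustrative values only, not the osism role's actual template — might be:)

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true
}
```

(With `live-restore` enabled, containers keep running across a daemon restart, which matters for the "Restart docker service" handler later in this play.)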
2026-03-08 00:31:30.469013 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-08 00:31:30.469019 | orchestrator | Sunday 08 March 2026 00:31:21 +0000 (0:00:01.298) 0:07:18.594 **********
2026-03-08 00:31:30.469025 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:30.469052 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:30.469058 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:30.469064 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:30.469070 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:30.469075 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:30.469081 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:30.469087 | orchestrator |
2026-03-08 00:31:30.469092 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-08 00:31:30.469098 | orchestrator | Sunday 08 March 2026 00:31:23 +0000 (0:00:01.042) 0:07:20.033 **********
2026-03-08 00:31:30.469105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:31:30.469112 | orchestrator |
2026-03-08 00:31:30.469117 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-08 00:31:30.469123 | orchestrator | Sunday 08 March 2026 00:31:24 +0000 (0:00:01.463) 0:07:21.075 **********
2026-03-08 00:31:30.469141 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.469147 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.469153 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.469158 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.469164 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.469169 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.469175 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.469181 | orchestrator |
2026-03-08 00:31:30.469187 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-08 00:31:30.469192 | orchestrator | Sunday 08 March 2026 00:31:25 +0000 (0:00:01.463) 0:07:22.539 **********
2026-03-08 00:31:30.469198 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.469203 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.469209 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.469214 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.469220 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.469226 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.469232 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.469237 | orchestrator |
2026-03-08 00:31:30.469243 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-08 00:31:30.469248 | orchestrator | Sunday 08 March 2026 00:31:26 +0000 (0:00:01.186) 0:07:23.725 **********
2026-03-08 00:31:30.469254 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.469260 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.469265 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.469271 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.469277 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.469282 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.469288 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.469293 | orchestrator |
2026-03-08 00:31:30.469299 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-08 00:31:30.469305 | orchestrator | Sunday 08 March 2026 00:31:28 +0000 (0:00:01.305) 0:07:25.030 **********
2026-03-08 00:31:30.469311 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:30.469317 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:30.469323 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:30.469329 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:30.469335 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:30.469341 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:30.469348 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:30.469353 | orchestrator |
2026-03-08 00:31:30.469359 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-08 00:31:30.469366 | orchestrator | Sunday 08 March 2026 00:31:29 +0000 (0:00:01.132) 0:07:26.163 **********
2026-03-08 00:31:30.469372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:31:30.469378 | orchestrator |
2026-03-08 00:31:30.469384 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:30.469390 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.864) 0:07:27.028 **********
2026-03-08 00:31:30.469395 | orchestrator |
2026-03-08 00:31:30.469401 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:30.469406 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.039) 0:07:27.068 **********
2026-03-08 00:31:30.469411 | orchestrator |
2026-03-08 00:31:30.469417 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:30.469423 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.048) 0:07:27.117 **********
2026-03-08 00:31:30.469429 | orchestrator |
2026-03-08 00:31:30.469434 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:30.469441 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.050) 0:07:27.168 **********
2026-03-08 00:31:30.469446 | orchestrator |
2026-03-08 00:31:30.469463 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:57.534612 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.038) 0:07:27.206 **********
2026-03-08 00:31:57.534710 | orchestrator |
2026-03-08 00:31:57.534722 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:57.534729 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.047) 0:07:27.254 **********
2026-03-08 00:31:57.534735 | orchestrator |
2026-03-08 00:31:57.534743 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-08 00:31:57.534749 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.039) 0:07:27.294 **********
2026-03-08 00:31:57.534755 | orchestrator |
2026-03-08 00:31:57.534761 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-08 00:31:57.534768 | orchestrator | Sunday 08 March 2026 00:31:30 +0000 (0:00:00.039) 0:07:27.333 **********
2026-03-08 00:31:57.534775 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:57.534783 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:57.534790 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:57.534797 | orchestrator |
2026-03-08 00:31:57.534804 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-08 00:31:57.534811 | orchestrator | Sunday 08 March 2026 00:31:32 +0000 (0:00:01.519) 0:07:28.853 **********
2026-03-08 00:31:57.534817 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:57.534826 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:57.534833 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:57.534840 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:57.534846 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:57.534853 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:57.534860 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:57.534867 | orchestrator |
2026-03-08 00:31:57.534874 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-08 00:31:57.534881 | orchestrator | Sunday 08 March 2026 00:31:33 +0000 (0:00:01.680) 0:07:30.534 **********
2026-03-08 00:31:57.534888 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:57.534894 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:57.534901 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:57.534908 | orchestrator | changed: [testbed-manager]
2026-03-08 00:31:57.534914 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:57.534921 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:57.534928 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:57.534934 | orchestrator |
2026-03-08 00:31:57.534941 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-08 00:31:57.534947 | orchestrator | Sunday 08 March 2026 00:31:35 +0000 (0:00:01.245) 0:07:31.779 **********
2026-03-08 00:31:57.534954 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:57.534960 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:57.534967 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:57.534973 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:57.534980 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:57.535054 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:57.535063 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:57.535069 | orchestrator |
2026-03-08 00:31:57.535076 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-08 00:31:57.535083 | orchestrator | Sunday 08 March 2026 00:31:37 +0000 (0:00:02.417) 0:07:34.196 **********
2026-03-08 00:31:57.535090 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:57.535097 | orchestrator |
2026-03-08 00:31:57.535104 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-08 00:31:57.535111 | orchestrator | Sunday 08 March 2026 00:31:37 +0000 (0:00:00.084) 0:07:34.281 **********
2026-03-08 00:31:57.535118 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.535125 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:57.535132 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:57.535139 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:57.535170 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:57.535179 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:57.535187 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:57.535194 | orchestrator |
2026-03-08 00:31:57.535216 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-08 00:31:57.535224 | orchestrator | Sunday 08 March 2026 00:31:38 +0000 (0:00:01.062) 0:07:35.343 **********
2026-03-08 00:31:57.535231 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:57.535238 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:57.535245 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:57.535253 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:57.535260 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:57.535267 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:57.535274 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:57.535281 | orchestrator |
2026-03-08 00:31:57.535287 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-08 00:31:57.535294 | orchestrator | Sunday 08 March 2026 00:31:39 +0000 (0:00:00.780) 0:07:36.123 **********
2026-03-08 00:31:57.535302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:31:57.535311 | orchestrator |
2026-03-08 00:31:57.535318 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-08 00:31:57.535324 | orchestrator | Sunday 08 March 2026 00:31:40 +0000 (0:00:00.868) 0:07:36.992 **********
2026-03-08 00:31:57.535331 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:57.535338 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:57.535345 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:57.535352 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.535358 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:57.535366 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:57.535373 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:57.535380 | orchestrator |
2026-03-08 00:31:57.535387 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-08 00:31:57.535394 | orchestrator | Sunday 08 March 2026 00:31:41 +0000 (0:00:00.843) 0:07:37.835 **********
2026-03-08 00:31:57.535400 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-08 00:31:57.535407 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-08 00:31:57.535432 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-08 00:31:57.535440 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-08 00:31:57.535447 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-08 00:31:57.535454 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-08 00:31:57.535460 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-08 00:31:57.535467 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-08 00:31:57.535474 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-08 00:31:57.535481 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-08 00:31:57.535487 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-08 00:31:57.535494 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-08 00:31:57.535500 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-08 00:31:57.535506 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-08 00:31:57.535513 | orchestrator |
2026-03-08 00:31:57.535519 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-08 00:31:57.535526 | orchestrator | Sunday 08 March 2026 00:31:43 +0000 (0:00:02.683) 0:07:40.519 **********
2026-03-08 00:31:57.535532 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:57.535539 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:57.535545 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:57.535561 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:57.535567 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:57.535574 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:57.535580 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:57.535586 | orchestrator |
2026-03-08 00:31:57.535593 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-08 00:31:57.535600 | orchestrator | Sunday 08 March 2026 00:31:44 +0000 (0:00:00.508) 0:07:41.028 **********
2026-03-08 00:31:57.535609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:31:57.535618 | orchestrator |
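(Editor's note: the docker fact files copied above are Ansible custom local facts; once deployed under /etc/ansible/facts.d they surface as ansible_local on later runs. A hedged sketch of how such a fact file could be deployed — assumed file names and module usage, not the collection's actual tasks:)

```yaml
# Illustrative only: deploy executable fact scripts so that later plays can
# read e.g. ansible_local.docker_containers without querying Docker directly.
- name: Copy docker fact files
  ansible.builtin.template:
    src: "{{ item }}.fact.j2"          # hypothetical template name
    dest: "/etc/ansible/facts.d/{{ item }}.fact"
    mode: "0755"
  loop:
    - docker_containers
    - docker_images
```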
2026-03-08 00:31:57.535625 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-08 00:31:57.535632 | orchestrator | Sunday 08 March 2026 00:31:45 +0000 (0:00:00.827) 0:07:41.855 **********
2026-03-08 00:31:57.535638 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:57.535645 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:57.535651 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.535658 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:57.535665 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:57.535671 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:57.535678 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:57.535685 | orchestrator |
2026-03-08 00:31:57.535692 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-08 00:31:57.535698 | orchestrator | Sunday 08 March 2026 00:31:46 +0000 (0:00:01.073) 0:07:42.929 **********
2026-03-08 00:31:57.535706 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:57.535713 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:57.535720 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:57.535727 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.535734 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:57.535739 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:57.535745 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:57.535751 | orchestrator |
2026-03-08 00:31:57.535756 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-08 00:31:57.535763 | orchestrator | Sunday 08 March 2026 00:31:46 +0000 (0:00:00.813) 0:07:43.742 **********
2026-03-08 00:31:57.535769 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:57.535774 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:57.535781 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:57.535795 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:57.535802 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:57.535809 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:57.535816 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:57.535822 | orchestrator |
2026-03-08 00:31:57.535830 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-08 00:31:57.535837 | orchestrator | Sunday 08 March 2026 00:31:47 +0000 (0:00:00.521) 0:07:44.264 **********
2026-03-08 00:31:57.535844 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:31:57.535851 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.535858 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:31:57.535865 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:31:57.535872 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:31:57.535879 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:31:57.535886 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:31:57.535893 | orchestrator |
2026-03-08 00:31:57.535900 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-08 00:31:57.535907 | orchestrator | Sunday 08 March 2026 00:31:49 +0000 (0:00:01.526) 0:07:45.791 **********
2026-03-08 00:31:57.535914 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:31:57.535921 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:31:57.535928 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:31:57.535935 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:31:57.535942 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:31:57.535957 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:31:57.535964 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:31:57.535971 | orchestrator |
2026-03-08 00:31:57.535979 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-08 00:31:57.536011 | orchestrator | Sunday 08 March 2026 00:31:49 +0000 (0:00:00.527) 0:07:46.318 **********
2026-03-08 00:31:57.536017 | orchestrator | ok: [testbed-manager]
2026-03-08 00:31:57.536024 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:31:57.536031 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:31:57.536038 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:31:57.536045 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:31:57.536052 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:31:57.536060 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:31:57.536067 | orchestrator |
2026-03-08 00:31:57.536084 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-08 00:32:30.084035 | orchestrator | Sunday 08 March 2026 00:31:57 +0000 (0:00:07.957) 0:07:54.276 **********
2026-03-08 00:32:30.084143 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:30.084159 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:30.084169 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084180 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:30.084189 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:30.084199 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:30.084208 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:30.084217 | orchestrator |
2026-03-08 00:32:30.084228 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-08 00:32:30.084238 | orchestrator | Sunday 08 March 2026 00:31:58 +0000 (0:00:01.393) 0:07:55.669 **********
2026-03-08 00:32:30.084247 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084255 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:30.084264 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:30.084281 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:30.084294 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:30.084311 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:30.084321 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:30.084330 | orchestrator |
2026-03-08 00:32:30.084340 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-08 00:32:30.084349 | orchestrator | Sunday 08 March 2026 00:32:00 +0000 (0:00:01.902) 0:07:57.572 **********
2026-03-08 00:32:30.084358 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:32:30.084366 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084375 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:32:30.084383 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:32:30.084393 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:32:30.084402 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:32:30.084411 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:32:30.084420 | orchestrator |
2026-03-08 00:32:30.084429 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-08 00:32:30.084439 | orchestrator | Sunday 08 March 2026 00:32:02 +0000 (0:00:01.639) 0:07:59.212 **********
2026-03-08 00:32:30.084448 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.084458 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.084467 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.084477 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084486 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.084494 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.084503 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.084512 | orchestrator |
2026-03-08 00:32:30.084521 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-08 00:32:30.084530 | orchestrator | Sunday 08 March 2026 00:32:03 +0000 (0:00:01.073) 0:08:00.286 **********
2026-03-08 00:32:30.084541 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:30.084550 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:30.084560 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:30.084597 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:30.084606 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:30.084616 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:30.084625 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:30.084635 | orchestrator |
2026-03-08 00:32:30.084645 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-08 00:32:30.084654 | orchestrator | Sunday 08 March 2026 00:32:04 +0000 (0:00:00.809) 0:08:01.095 **********
2026-03-08 00:32:30.084663 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:30.084673 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:30.084683 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:30.084692 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:30.084700 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:30.084710 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:30.084720 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:30.084729 | orchestrator |
2026-03-08 00:32:30.084738 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-08 00:32:30.084748 | orchestrator | Sunday 08 March 2026 00:32:04 +0000 (0:00:00.492) 0:08:01.588 **********
2026-03-08 00:32:30.084757 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.084768 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.084778 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.084787 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084797 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.084808 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.084819 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.084829 | orchestrator |
2026-03-08 00:32:30.084838 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-08 00:32:30.084848 | orchestrator | Sunday 08 March 2026 00:32:05 +0000 (0:00:00.519) 0:08:02.108 **********
2026-03-08 00:32:30.084858 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.084868 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.084877 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.084887 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.084897 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.084908 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.084918 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.084929 | orchestrator |
2026-03-08 00:32:30.084969 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-08 00:32:30.084980 | orchestrator | Sunday 08 March 2026 00:32:06 +0000 (0:00:00.668) 0:08:02.776 **********
2026-03-08 00:32:30.084990 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.085000 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.085010 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.085019 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.085029 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.085038 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.085048 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.085057 | orchestrator |
2026-03-08 00:32:30.085066 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-08 00:32:30.085076 | orchestrator | Sunday 08 March 2026 00:32:06 +0000 (0:00:00.506) 0:08:03.283 **********
2026-03-08 00:32:30.085086 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.085096 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.085107 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.085116 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.085126 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.085136 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.085145 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.085155 | orchestrator |
2026-03-08 00:32:30.085164 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-08 00:32:30.085198 | orchestrator | Sunday 08 March 2026 00:32:11 +0000 (0:00:05.413) 0:08:08.696 **********
2026-03-08 00:32:30.085210 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:32:30.085219 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:32:30.085243 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:32:30.085271 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:32:30.085281 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:32:30.085291 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:32:30.085300 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:32:30.085309 | orchestrator |
2026-03-08 00:32:30.085318 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-08 00:32:30.085329 | orchestrator | Sunday 08 March 2026 00:32:12 +0000 (0:00:00.509) 0:08:09.206 **********
2026-03-08 00:32:30.085342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:30.085353 | orchestrator |
2026-03-08 00:32:30.085363 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-08 00:32:30.085370 | orchestrator | Sunday 08 March 2026 00:32:13 +0000 (0:00:01.025) 0:08:10.232 **********
2026-03-08 00:32:30.085376 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.085381 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.085387 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.085393 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.085398 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.085404 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.085410 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.085415 | orchestrator |
2026-03-08 00:32:30.085421 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-08 00:32:30.085427 | orchestrator | Sunday 08 March 2026 00:32:15 +0000 (0:00:02.059) 0:08:12.292 **********
2026-03-08 00:32:30.085432 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.085438 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.085443 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.085449 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.085454 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.085460 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.085466 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.085471 | orchestrator |
2026-03-08 00:32:30.085477 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-08 00:32:30.085483 | orchestrator | Sunday 08 March 2026 00:32:16 +0000 (0:00:01.103) 0:08:13.395 **********
2026-03-08 00:32:30.085489 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:32:30.085494 | orchestrator | ok: [testbed-manager]
2026-03-08 00:32:30.085500 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:32:30.085506 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:32:30.085511 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:32:30.085517 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:32:30.085522 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:32:30.085528 | orchestrator |
2026-03-08 00:32:30.085534 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-08 00:32:30.085540 | orchestrator | Sunday 08 March 2026 00:32:17 +0000 (0:00:00.792) 0:08:14.188 **********
2026-03-08 00:32:30.085549 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085561 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085570 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085584 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085594 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085604 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085620 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-08 00:32:30.085630 | orchestrator |
2026-03-08 00:32:30.085639 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-08 00:32:30.085645 | orchestrator | Sunday 08 March 2026 00:32:19 +0000 (0:00:01.948) 0:08:16.136 **********
2026-03-08 00:32:30.085651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:32:30.085657 | orchestrator |
2026-03-08 00:32:30.085663 | orchestrator | TASK [osism.services.lldpd :
Install lldpd package] **************************** 2026-03-08 00:32:30.085668 | orchestrator | Sunday 08 March 2026 00:32:20 +0000 (0:00:00.918) 0:08:17.055 ********** 2026-03-08 00:32:30.085674 | orchestrator | changed: [testbed-manager] 2026-03-08 00:32:30.085680 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:32:30.085686 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:32:30.085691 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:32:30.085697 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:32:30.085703 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:32:30.085708 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:32:30.085714 | orchestrator | 2026-03-08 00:32:30.085720 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-08 00:32:30.085735 | orchestrator | Sunday 08 March 2026 00:32:30 +0000 (0:00:09.770) 0:08:26.825 ********** 2026-03-08 00:33:01.978110 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:01.978234 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:01.978252 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:01.978265 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:01.978276 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:01.978305 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:01.978316 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:01.978339 | orchestrator | 2026-03-08 00:33:01.978352 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-08 00:33:01.978365 | orchestrator | Sunday 08 March 2026 00:32:31 +0000 (0:00:01.769) 0:08:28.595 ********** 2026-03-08 00:33:01.978376 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:01.978387 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:01.978478 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:01.978493 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:01.978506 | orchestrator | ok: 
[testbed-node-1] 2026-03-08 00:33:01.978519 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:01.978532 | orchestrator | 2026-03-08 00:33:01.978545 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-08 00:33:01.978557 | orchestrator | Sunday 08 March 2026 00:32:33 +0000 (0:00:01.500) 0:08:30.095 ********** 2026-03-08 00:33:01.978570 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.978584 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.978598 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.978610 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.978623 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.978636 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.978648 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.978661 | orchestrator | 2026-03-08 00:33:01.978674 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-08 00:33:01.978686 | orchestrator | 2026-03-08 00:33:01.978700 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-08 00:33:01.978715 | orchestrator | Sunday 08 March 2026 00:32:34 +0000 (0:00:01.559) 0:08:31.655 ********** 2026-03-08 00:33:01.978735 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:33:01.978753 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:33:01.978801 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:33:01.978823 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:33:01.978842 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:33:01.978861 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:33:01.978880 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:33:01.978977 | orchestrator | 2026-03-08 00:33:01.978991 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-08 
00:33:01.979003 | orchestrator | 2026-03-08 00:33:01.979014 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-08 00:33:01.979025 | orchestrator | Sunday 08 March 2026 00:32:35 +0000 (0:00:00.499) 0:08:32.155 ********** 2026-03-08 00:33:01.979035 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979046 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979057 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.979068 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.979080 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.979090 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.979101 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.979112 | orchestrator | 2026-03-08 00:33:01.979123 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-08 00:33:01.979134 | orchestrator | Sunday 08 March 2026 00:32:36 +0000 (0:00:01.381) 0:08:33.536 ********** 2026-03-08 00:33:01.979145 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:01.979155 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:01.979166 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:01.979177 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:01.979187 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:01.979198 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:01.979209 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:01.979219 | orchestrator | 2026-03-08 00:33:01.979230 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-08 00:33:01.979241 | orchestrator | Sunday 08 March 2026 00:32:38 +0000 (0:00:01.380) 0:08:34.917 ********** 2026-03-08 00:33:01.979252 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:33:01.979277 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:33:01.979288 | orchestrator | skipping: 
[testbed-node-5] 2026-03-08 00:33:01.979299 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:33:01.979310 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:33:01.979321 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:33:01.979331 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:33:01.979342 | orchestrator | 2026-03-08 00:33:01.979353 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-08 00:33:01.979364 | orchestrator | Sunday 08 March 2026 00:32:38 +0000 (0:00:00.671) 0:08:35.588 ********** 2026-03-08 00:33:01.979375 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:01.979387 | orchestrator | 2026-03-08 00:33:01.979398 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-08 00:33:01.979409 | orchestrator | Sunday 08 March 2026 00:32:39 +0000 (0:00:00.800) 0:08:36.389 ********** 2026-03-08 00:33:01.979421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:01.979435 | orchestrator | 2026-03-08 00:33:01.979446 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-08 00:33:01.979457 | orchestrator | Sunday 08 March 2026 00:32:40 +0000 (0:00:00.792) 0:08:37.182 ********** 2026-03-08 00:33:01.979468 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.979478 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979489 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979500 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.979521 | orchestrator | changed: [testbed-node-1] 2026-03-08 
00:33:01.979532 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.979542 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.979553 | orchestrator | 2026-03-08 00:33:01.979564 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-08 00:33:01.979597 | orchestrator | Sunday 08 March 2026 00:32:49 +0000 (0:00:09.363) 0:08:46.545 ********** 2026-03-08 00:33:01.979609 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979620 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979630 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.979641 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.979652 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.979662 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.979673 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.979684 | orchestrator | 2026-03-08 00:33:01.979695 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-08 00:33:01.979706 | orchestrator | Sunday 08 March 2026 00:32:50 +0000 (0:00:00.906) 0:08:47.452 ********** 2026-03-08 00:33:01.979717 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979727 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979738 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.979748 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.979759 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.979770 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.979780 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.979791 | orchestrator | 2026-03-08 00:33:01.979802 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-08 00:33:01.979813 | orchestrator | Sunday 08 March 2026 00:32:52 +0000 (0:00:01.357) 0:08:48.809 ********** 2026-03-08 00:33:01.979823 | 
orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979834 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979845 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.979855 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.979866 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.979877 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.979912 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.979924 | orchestrator | 2026-03-08 00:33:01.979935 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-08 00:33:01.979946 | orchestrator | Sunday 08 March 2026 00:32:54 +0000 (0:00:02.639) 0:08:51.449 ********** 2026-03-08 00:33:01.979957 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.979968 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.979978 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.980008 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.980019 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.980030 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.980040 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.980051 | orchestrator | 2026-03-08 00:33:01.980062 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-08 00:33:01.980073 | orchestrator | Sunday 08 March 2026 00:32:55 +0000 (0:00:01.248) 0:08:52.697 ********** 2026-03-08 00:33:01.980084 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.980095 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.980105 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.980116 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.980127 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.980138 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.980148 | orchestrator | changed: 
[testbed-node-2] 2026-03-08 00:33:01.980159 | orchestrator | 2026-03-08 00:33:01.980170 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-08 00:33:01.980181 | orchestrator | 2026-03-08 00:33:01.980191 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-08 00:33:01.980202 | orchestrator | Sunday 08 March 2026 00:32:57 +0000 (0:00:01.136) 0:08:53.834 ********** 2026-03-08 00:33:01.980221 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:01.980231 | orchestrator | 2026-03-08 00:33:01.980242 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-08 00:33:01.980253 | orchestrator | Sunday 08 March 2026 00:32:58 +0000 (0:00:00.941) 0:08:54.776 ********** 2026-03-08 00:33:01.980264 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:01.980280 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:01.980291 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:01.980302 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:01.980313 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:01.980324 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:01.980335 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:01.980345 | orchestrator | 2026-03-08 00:33:01.980356 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-08 00:33:01.980367 | orchestrator | Sunday 08 March 2026 00:32:58 +0000 (0:00:00.838) 0:08:55.614 ********** 2026-03-08 00:33:01.980378 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:01.980389 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:01.980400 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:01.980411 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:01.980421 | 
orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:01.980432 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:01.980442 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:01.980453 | orchestrator | 2026-03-08 00:33:01.980464 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-08 00:33:01.980475 | orchestrator | Sunday 08 March 2026 00:33:00 +0000 (0:00:01.225) 0:08:56.839 ********** 2026-03-08 00:33:01.980486 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:33:01.980497 | orchestrator | 2026-03-08 00:33:01.980507 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-08 00:33:01.980518 | orchestrator | Sunday 08 March 2026 00:33:01 +0000 (0:00:01.038) 0:08:57.878 ********** 2026-03-08 00:33:01.980529 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:01.980540 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:01.980550 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:01.980561 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:01.980572 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:01.980582 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:01.980593 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:01.980604 | orchestrator | 2026-03-08 00:33:01.980615 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-08 00:33:01.980633 | orchestrator | Sunday 08 March 2026 00:33:01 +0000 (0:00:00.837) 0:08:58.716 ********** 2026-03-08 00:33:03.494275 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:03.494363 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:03.494374 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:03.494381 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:03.494387 | 
orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:03.494394 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:03.494401 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:03.494408 | orchestrator | 2026-03-08 00:33:03.494416 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:33:03.494425 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-08 00:33:03.494433 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-08 00:33:03.494440 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-08 00:33:03.494472 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-08 00:33:03.494479 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-08 00:33:03.494485 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-08 00:33:03.494491 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-08 00:33:03.494497 | orchestrator | 2026-03-08 00:33:03.494504 | orchestrator | 2026-03-08 00:33:03.494510 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:33:03.494516 | orchestrator | Sunday 08 March 2026 00:33:03 +0000 (0:00:01.089) 0:08:59.806 ********** 2026-03-08 00:33:03.494523 | orchestrator | =============================================================================== 2026-03-08 00:33:03.494529 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.85s 2026-03-08 00:33:03.494536 | orchestrator | osism.commons.packages : Download required packages -------------------- 
66.32s 2026-03-08 00:33:03.494543 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.03s 2026-03-08 00:33:03.494549 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.19s 2026-03-08 00:33:03.494556 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.12s 2026-03-08 00:33:03.494564 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.11s 2026-03-08 00:33:03.494570 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.00s 2026-03-08 00:33:03.494576 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.51s 2026-03-08 00:33:03.494582 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.15s 2026-03-08 00:33:03.494589 | orchestrator | osism.commons.cleanup : Remove cloudinit package ----------------------- 10.04s 2026-03-08 00:33:03.494596 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.79s 2026-03-08 00:33:03.494616 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.77s 2026-03-08 00:33:03.494624 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.39s 2026-03-08 00:33:03.494631 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.36s 2026-03-08 00:33:03.494637 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.95s 2026-03-08 00:33:03.494643 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.96s 2026-03-08 00:33:03.494649 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.65s 2026-03-08 00:33:03.494655 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.19s 
2026-03-08 00:33:03.494662 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.04s 2026-03-08 00:33:03.494669 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.61s 2026-03-08 00:33:03.853710 | orchestrator | + osism apply fail2ban 2026-03-08 00:33:16.562629 | orchestrator | 2026-03-08 00:33:16 | INFO  | Prepare task for execution of fail2ban. 2026-03-08 00:33:16.646007 | orchestrator | 2026-03-08 00:33:16 | INFO  | Task d794b69f-d736-4b2c-91a7-8c398cd5cf54 (fail2ban) was prepared for execution. 2026-03-08 00:33:16.646152 | orchestrator | 2026-03-08 00:33:16 | INFO  | It takes a moment until task d794b69f-d736-4b2c-91a7-8c398cd5cf54 (fail2ban) has been started and output is visible here. 2026-03-08 00:33:38.951355 | orchestrator | 2026-03-08 00:33:38.951463 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-08 00:33:38.951507 | orchestrator | 2026-03-08 00:33:38.951520 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-08 00:33:38.951532 | orchestrator | Sunday 08 March 2026 00:33:21 +0000 (0:00:00.262) 0:00:00.262 ********** 2026-03-08 00:33:38.951544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:33:38.951557 | orchestrator | 2026-03-08 00:33:38.951568 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-08 00:33:38.951579 | orchestrator | Sunday 08 March 2026 00:33:22 +0000 (0:00:01.120) 0:00:01.383 ********** 2026-03-08 00:33:38.951589 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:38.951601 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:38.951612 | orchestrator | changed: 
[testbed-node-0] 2026-03-08 00:33:38.951622 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:38.951633 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:38.951643 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:38.951653 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:38.951664 | orchestrator | 2026-03-08 00:33:38.951675 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-08 00:33:38.951685 | orchestrator | Sunday 08 March 2026 00:33:33 +0000 (0:00:11.437) 0:00:12.820 ********** 2026-03-08 00:33:38.951696 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:38.951706 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:38.951717 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:38.951727 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:38.951737 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:38.951748 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:38.951758 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:38.951769 | orchestrator | 2026-03-08 00:33:38.951780 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-08 00:33:38.951790 | orchestrator | Sunday 08 March 2026 00:33:35 +0000 (0:00:01.649) 0:00:14.470 ********** 2026-03-08 00:33:38.951801 | orchestrator | ok: [testbed-manager] 2026-03-08 00:33:38.951813 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:33:38.951823 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:33:38.951880 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:33:38.951891 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:33:38.951902 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:33:38.951912 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:33:38.951922 | orchestrator | 2026-03-08 00:33:38.951933 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-08 
00:33:38.951944 | orchestrator | Sunday 08 March 2026 00:33:37 +0000 (0:00:01.625) 0:00:16.096 ********** 2026-03-08 00:33:38.951954 | orchestrator | changed: [testbed-manager] 2026-03-08 00:33:38.951965 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:33:38.951977 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:33:38.951987 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:33:38.951998 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:33:38.952009 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:33:38.952019 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:33:38.952030 | orchestrator | 2026-03-08 00:33:38.952041 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:33:38.952052 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952063 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952074 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952084 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952119 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952130 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952142 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:33:38.952152 | orchestrator | 2026-03-08 00:33:38.952163 | orchestrator | 2026-03-08 00:33:38.952174 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:33:38.952185 | orchestrator | Sunday 08 March 2026 00:33:38 +0000 (0:00:01.605) 
0:00:17.701 ********** 2026-03-08 00:33:38.952195 | orchestrator | =============================================================================== 2026-03-08 00:33:38.952206 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.44s 2026-03-08 00:33:38.952216 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.65s 2026-03-08 00:33:38.952227 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.63s 2026-03-08 00:33:38.952238 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s 2026-03-08 00:33:38.952249 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.12s 2026-03-08 00:33:39.268440 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-08 00:33:39.268530 | orchestrator | + osism apply network 2026-03-08 00:33:51.410758 | orchestrator | 2026-03-08 00:33:51 | INFO  | Prepare task for execution of network. 2026-03-08 00:33:51.479000 | orchestrator | 2026-03-08 00:33:51 | INFO  | Task 35e0587e-e84a-461e-8ec2-7c2d9c184e23 (network) was prepared for execution. 2026-03-08 00:33:51.479074 | orchestrator | 2026-03-08 00:33:51 | INFO  | It takes a moment until task 35e0587e-e84a-461e-8ec2-7c2d9c184e23 (network) has been started and output is visible here. 
2026-03-08 00:34:19.979478 | orchestrator |
2026-03-08 00:34:19.979591 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-08 00:34:19.979608 | orchestrator |
2026-03-08 00:34:19.979622 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-08 00:34:19.979634 | orchestrator | Sunday 08 March 2026 00:33:55 +0000 (0:00:00.189) 0:00:00.189 **********
2026-03-08 00:34:19.979645 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.979657 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.979668 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.979679 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.979689 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.979700 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.979711 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.979721 | orchestrator |
2026-03-08 00:34:19.979732 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-08 00:34:19.979743 | orchestrator | Sunday 08 March 2026 00:33:56 +0000 (0:00:00.520) 0:00:00.709 **********
2026-03-08 00:34:19.979755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:19.979813 | orchestrator |
2026-03-08 00:34:19.979825 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-08 00:34:19.979836 | orchestrator | Sunday 08 March 2026 00:33:57 +0000 (0:00:01.029) 0:00:01.739 **********
2026-03-08 00:34:19.979847 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.979857 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.979868 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.979879 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.979889 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.979925 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.979937 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.979947 | orchestrator |
2026-03-08 00:34:19.979958 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-08 00:34:19.979969 | orchestrator | Sunday 08 March 2026 00:33:59 +0000 (0:00:02.094) 0:00:03.834 **********
2026-03-08 00:34:19.979980 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.979990 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.980001 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.980011 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.980024 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.980036 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.980049 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.980061 | orchestrator |
2026-03-08 00:34:19.980073 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-08 00:34:19.980086 | orchestrator | Sunday 08 March 2026 00:34:01 +0000 (0:00:01.821) 0:00:05.656 **********
2026-03-08 00:34:19.980099 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-08 00:34:19.980112 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-08 00:34:19.980124 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-08 00:34:19.980145 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-08 00:34:19.980163 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-08 00:34:19.980181 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-08 00:34:19.980201 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-08 00:34:19.980221 | orchestrator |
2026-03-08 00:34:19.980240 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-08 00:34:19.980260 | orchestrator | Sunday 08 March 2026 00:34:02 +0000 (0:00:00.977) 0:00:06.633 **********
2026-03-08 00:34:19.980279 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:34:19.980300 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 00:34:19.980321 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 00:34:19.980340 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 00:34:19.980358 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:34:19.980369 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 00:34:19.980380 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 00:34:19.980391 | orchestrator |
2026-03-08 00:34:19.980402 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-08 00:34:19.980413 | orchestrator | Sunday 08 March 2026 00:34:05 +0000 (0:00:03.601) 0:00:10.234 **********
2026-03-08 00:34:19.980424 | orchestrator | changed: [testbed-manager]
2026-03-08 00:34:19.980436 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:19.980487 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:19.980505 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:19.980522 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:19.980542 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:19.980560 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:19.980578 | orchestrator |
2026-03-08 00:34:19.980598 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-08 00:34:19.980617 | orchestrator | Sunday 08 March 2026 00:34:07 +0000 (0:00:01.703) 0:00:11.938 **********
2026-03-08 00:34:19.980636 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:34:19.980649 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 00:34:19.980660 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 00:34:19.980670 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 00:34:19.980700 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 00:34:19.980711 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 00:34:19.980722 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 00:34:19.980732 | orchestrator |
2026-03-08 00:34:19.980743 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-08 00:34:19.980754 | orchestrator | Sunday 08 March 2026 00:34:09 +0000 (0:00:01.819) 0:00:13.757 **********
2026-03-08 00:34:19.980821 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.980833 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.980843 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.980854 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.980865 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.980875 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.980886 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.980896 | orchestrator |
2026-03-08 00:34:19.980907 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-08 00:34:19.980939 | orchestrator | Sunday 08 March 2026 00:34:10 +0000 (0:00:01.139) 0:00:14.897 **********
2026-03-08 00:34:19.980951 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:19.980962 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:19.980972 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:19.980983 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:19.980993 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:19.981004 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:19.981014 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:19.981024 | orchestrator |
2026-03-08 00:34:19.981035 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-08 00:34:19.981046 | orchestrator | Sunday 08 March 2026 00:34:10 +0000 (0:00:00.657) 0:00:15.555 **********
2026-03-08 00:34:19.981057 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.981067 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.981078 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.981088 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.981099 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.981109 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.981120 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.981130 | orchestrator |
2026-03-08 00:34:19.981141 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-08 00:34:19.981151 | orchestrator | Sunday 08 March 2026 00:34:13 +0000 (0:00:02.218) 0:00:17.773 **********
2026-03-08 00:34:19.981162 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:19.981173 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:19.981183 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:19.981194 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:19.981204 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:19.981214 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:19.981226 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-08 00:34:19.981238 | orchestrator |
2026-03-08 00:34:19.981249 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-08 00:34:19.981259 | orchestrator | Sunday 08 March 2026 00:34:14 +0000 (0:00:00.902) 0:00:18.676 **********
2026-03-08 00:34:19.981270 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.981280 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:34:19.981291 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:34:19.981301 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:34:19.981311 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:34:19.981322 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:34:19.981332 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:34:19.981343 | orchestrator |
2026-03-08 00:34:19.981353 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-08 00:34:19.981364 | orchestrator | Sunday 08 March 2026 00:34:15 +0000 (0:00:01.668) 0:00:20.344 **********
2026-03-08 00:34:19.981375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:19.981388 | orchestrator |
2026-03-08 00:34:19.981398 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-08 00:34:19.981409 | orchestrator | Sunday 08 March 2026 00:34:16 +0000 (0:00:00.941) 0:00:21.604 **********
2026-03-08 00:34:19.981426 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.981437 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.981447 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.981458 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.981468 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.981479 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.981489 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.981500 | orchestrator |
2026-03-08 00:34:19.981511 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-08 00:34:19.981521 | orchestrator | Sunday 08 March 2026 00:34:17 +0000 (0:00:00.807) 0:00:22.545 **********
2026-03-08 00:34:19.981532 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:19.981548 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:19.981559 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:19.981569 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:19.981580 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:19.981590 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:19.981600 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:19.981611 | orchestrator |
2026-03-08 00:34:19.981621 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-08 00:34:19.981632 | orchestrator | Sunday 08 March 2026 00:34:18 +0000 (0:00:00.807) 0:00:23.353 **********
2026-03-08 00:34:19.981643 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981653 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981664 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981674 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981685 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981695 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981706 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981716 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981727 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981737 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981747 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981758 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-08 00:34:19.981787 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981798 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-08 00:34:19.981809 | orchestrator |
2026-03-08 00:34:19.981827 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-08 00:34:35.517963 | orchestrator | Sunday 08 March 2026 00:34:19 +0000 (0:00:01.240) 0:00:24.593 **********
2026-03-08 00:34:35.518172 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:35.518203 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:35.519042 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:35.519071 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:35.519082 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:35.519093 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:35.519104 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:35.519115 | orchestrator |
2026-03-08 00:34:35.519127 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-08 00:34:35.519139 | orchestrator | Sunday 08 March 2026 00:34:20 +0000 (0:00:00.645) 0:00:25.239 **********
2026-03-08 00:34:35.519152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-3, testbed-node-2, testbed-node-4
2026-03-08 00:34:35.519190 | orchestrator |
2026-03-08 00:34:35.519202 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-08 00:34:35.519213 | orchestrator | Sunday 08 March 2026 00:34:25 +0000 (0:00:04.387) 0:00:29.626 **********
2026-03-08 00:34:35.519226 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519263 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519450 | orchestrator |
2026-03-08 00:34:35.519461 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-08 00:34:35.519472 | orchestrator | Sunday 08 March 2026 00:34:30 +0000 (0:00:05.217) 0:00:34.843 **********
2026-03-08 00:34:35.519483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519494 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519538 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-08 00:34:35.519588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:35.519640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:48.285350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-08 00:34:48.285467 | orchestrator |
2026-03-08 00:34:48.285485 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-08 00:34:48.285498 | orchestrator | Sunday 08 March 2026 00:34:35 +0000 (0:00:05.446) 0:00:40.290 **********
2026-03-08 00:34:48.285511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:48.285523 | orchestrator |
2026-03-08 00:34:48.285534 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-08 00:34:48.285546 | orchestrator | Sunday 08 March 2026 00:34:36 +0000 (0:00:01.150) 0:00:41.440 **********
2026-03-08 00:34:48.285557 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:48.285571 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:48.285582 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:48.285593 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:48.285605 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:48.285616 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:48.285627 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:48.285638 | orchestrator |
2026-03-08 00:34:48.285649 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-08 00:34:48.285660 | orchestrator | Sunday 08 March 2026 00:34:38 +0000 (0:00:01.806) 0:00:43.247 **********
2026-03-08 00:34:48.285672 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.285684 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.285695 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.285706 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.285717 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.285761 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.285773 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.285783 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.285794 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.285806 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.285817 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.285828 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.285839 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.285849 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.285860 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.285888 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.285902 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.285915 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.285950 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.285963 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.285975 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.285987 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.286000 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.286012 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.286079 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.286090 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.286101 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.286112 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.286123 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.286133 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.286144 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-08 00:34:48.286155 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-08 00:34:48.286165 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-08 00:34:48.286176 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-08 00:34:48.286220 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.286232 | orchestrator |
2026-03-08 00:34:48.286244 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-08 00:34:48.286273 | orchestrator | Sunday 08 March 2026 00:34:39 +0000 (0:00:00.811) 0:00:44.058 **********
2026-03-08 00:34:48.286285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:34:48.286297 | orchestrator |
2026-03-08 00:34:48.286308 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-08 00:34:48.286319 | orchestrator | Sunday 08 March 2026 00:34:40 +0000 (0:00:01.025) 0:00:45.084 **********
2026-03-08 00:34:48.286330 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.286340 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.286351 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.286362 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.286373 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.286383 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.286394 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.286405 | orchestrator |
2026-03-08 00:34:48.286416 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-08 00:34:48.286427 | orchestrator | Sunday 08 March 2026 00:34:40 +0000 (0:00:00.479) 0:00:45.564 **********
2026-03-08 00:34:48.286437 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.286448 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.286459 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.286469 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.286480 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.286491 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.286502 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.286512 | orchestrator |
2026-03-08 00:34:48.286523 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-08 00:34:48.286534 | orchestrator | Sunday 08 March 2026 00:34:41 +0000 (0:00:00.667) 0:00:46.231 **********
2026-03-08 00:34:48.286545 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.286566 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.286576 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.286587 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.286598 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.286608 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.286619 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.286630 | orchestrator |
2026-03-08 00:34:48.286641 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-08 00:34:48.286652 | orchestrator | Sunday 08 March 2026 00:34:42 +0000 (0:00:00.536) 0:00:46.768 **********
2026-03-08 00:34:48.286662 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:48.286673 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:48.286684 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:48.286695 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:48.286706 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:48.286716 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:48.286758 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:48.286770 | orchestrator |
2026-03-08 00:34:48.286780 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-08 00:34:48.286791 | orchestrator | Sunday 08 March 2026 00:34:43 +0000 (0:00:01.555) 0:00:48.323 **********
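For orientation, the files written by the two "Create systemd networkd netdev/network files" tasks above (named `30-vxlan0.netdev` / `30-vxlan0.network` per the cleanup task's file list) plausibly look like the following sketch for `testbed-manager`. Everything here is inferred only from the logged item parameters (`vni: 42`, `local_ip: '192.168.16.5'`, `mtu: 1350`, the `dests` list, and the address `192.168.112.5/20`); the role's actual templates may differ, and every key shown is an assumption, not a copy of the generated files.

```ini
; Sketch of /etc/systemd/network/30-vxlan0.netdev (assumed layout, not the
; role's real template). Values are taken from the logged loop item.
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

; Sketch of /etc/systemd/network/30-vxlan0.network (same caveat).
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

; Unicast VXLAN without multicast typically needs one all-zero FDB flood
; entry per remote VTEP; the per-host 'dests' lists in the log suggest
; one [BridgeFDB] section like this per destination (192.168.16.10..15).
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

The `addresses` list is empty for the compute nodes' `vxlan0`, which would simply drop the `Address=` line there; `vxlan1` (VNI 23) follows the same pattern with the `192.168.128.0/20` addresses.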
2026-03-08 00:34:48.286802 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:48.286813 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:48.286824 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:48.286835 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:48.286845 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:48.286856 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:48.286867 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:48.286885 | orchestrator |
2026-03-08 00:34:48.286904 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-08 00:34:48.286934 | orchestrator | Sunday 08 March 2026 00:34:44 +0000 (0:00:00.987) 0:00:49.311 **********
2026-03-08 00:34:48.286953 | orchestrator | ok: [testbed-manager]
2026-03-08 00:34:48.286971 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:34:48.286988 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:34:48.287005 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:34:48.287023 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:34:48.287041 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:34:48.287060 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:34:48.287079 | orchestrator |
2026-03-08 00:34:48.287097 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-08 00:34:48.287114 | orchestrator | Sunday 08 March 2026 00:34:46 +0000 (0:00:02.205) 0:00:51.516 **********
2026-03-08 00:34:48.287124 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.287135 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.287145 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.287156 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.287167 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.287177 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.287188 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.287199 | orchestrator |
2026-03-08 00:34:48.287209 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-08 00:34:48.287220 | orchestrator | Sunday 08 March 2026 00:34:47 +0000 (0:00:00.791) 0:00:52.308 **********
2026-03-08 00:34:48.287231 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:34:48.287241 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:34:48.287252 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:34:48.287262 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:34:48.287273 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:34:48.287283 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:34:48.287294 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:34:48.287304 | orchestrator |
2026-03-08 00:34:48.287315 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:34:48.287327 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-08 00:34:48.287353 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.287375 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.689454 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.689587 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.689602 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.689614 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-08 00:34:48.689625 | orchestrator |
2026-03-08 00:34:48.689637 | orchestrator |
2026-03-08 00:34:48.689648 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:34:48.689661 | orchestrator | Sunday 08 March 2026 00:34:48 +0000 (0:00:00.590) 0:00:52.899 **********
2026-03-08 00:34:48.689672 | orchestrator | ===============================================================================
2026-03-08 00:34:48.689682 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.45s
2026-03-08 00:34:48.689693 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.22s
2026-03-08 00:34:48.689704 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.39s
2026-03-08 00:34:48.689714 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.60s
2026-03-08 00:34:48.689820 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.22s
2026-03-08 00:34:48.689832 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.21s
2026-03-08 00:34:48.689843 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s
2026-03-08 00:34:48.689854 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s
2026-03-08 00:34:48.689865 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s
2026-03-08 00:34:48.689876 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.81s
2026-03-08 00:34:48.689887 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s
2026-03-08 00:34:48.689897 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s
2026-03-08 00:34:48.689908 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.56s
2026-03-08 00:34:48.689919 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s
2026-03-08 00:34:48.689930 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s
2026-03-08 00:34:48.689940 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.15s
2026-03-08 00:34:48.689951 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s
2026-03-08 00:34:48.689962 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.03s
2026-03-08 00:34:48.689973 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.03s
2026-03-08 00:34:48.689984 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 0.99s
2026-03-08 00:34:49.037957 | orchestrator | + osism apply wireguard
2026-03-08 00:35:01.257769 | orchestrator | 2026-03-08 00:35:01 | INFO  | Prepare task for execution of wireguard.
2026-03-08 00:35:01.359872 | orchestrator | 2026-03-08 00:35:01 | INFO  | Task f24b69f3-5709-48d6-877f-91c917bd19b5 (wireguard) was prepared for execution.
2026-03-08 00:35:01.360006 | orchestrator | 2026-03-08 00:35:01 | INFO  | It takes a moment until task f24b69f3-5709-48d6-877f-91c917bd19b5 (wireguard) has been started and output is visible here.
2026-03-08 00:35:19.527488 | orchestrator |
2026-03-08 00:35:19.527611 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-08 00:35:19.527629 | orchestrator |
2026-03-08 00:35:19.527642 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-08 00:35:19.527654 | orchestrator | Sunday 08 March 2026 00:35:05 +0000 (0:00:00.218) 0:00:00.219 **********
2026-03-08 00:35:19.527664 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:19.527733 | orchestrator |
2026-03-08 00:35:19.527745 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-08 00:35:19.527756 | orchestrator | Sunday 08 March 2026 00:35:06 +0000 (0:00:01.193) 0:00:01.412 **********
2026-03-08 00:35:19.527767 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.527779 | orchestrator |
2026-03-08 00:35:19.527789 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-08 00:35:19.527800 | orchestrator | Sunday 08 March 2026 00:35:12 +0000 (0:00:05.631) 0:00:07.043 **********
2026-03-08 00:35:19.527811 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.527822 | orchestrator |
2026-03-08 00:35:19.527832 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-08 00:35:19.527843 | orchestrator | Sunday 08 March 2026 00:35:12 +0000 (0:00:00.399) 0:00:07.555 **********
2026-03-08 00:35:19.527854 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.527865 | orchestrator |
2026-03-08 00:35:19.527875 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-08 00:35:19.527886 | orchestrator | Sunday 08 March 2026 00:35:13 +0000 (0:00:00.548) 0:00:07.955 **********
2026-03-08 00:35:19.527897 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:19.527908 | orchestrator |
2026-03-08 00:35:19.527918 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-08 00:35:19.527929 | orchestrator | Sunday 08 March 2026 00:35:13 +0000 (0:00:00.370) 0:00:08.503 **********
2026-03-08 00:35:19.527940 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:19.527951 | orchestrator |
2026-03-08 00:35:19.527961 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-08 00:35:19.527972 | orchestrator | Sunday 08 March 2026 00:35:14 +0000 (0:00:00.370) 0:00:08.874 **********
2026-03-08 00:35:19.527983 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:19.527993 | orchestrator |
2026-03-08 00:35:19.528004 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-08 00:35:19.528015 | orchestrator | Sunday 08 March 2026 00:35:14 +0000 (0:00:00.428) 0:00:09.303 **********
2026-03-08 00:35:19.528025 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.528039 | orchestrator |
2026-03-08 00:35:19.528051 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-08 00:35:19.528064 | orchestrator | Sunday 08 March 2026 00:35:15 +0000 (0:00:01.170) 0:00:10.473 **********
2026-03-08 00:35:19.528078 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-08 00:35:19.528091 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.528103 | orchestrator |
2026-03-08 00:35:19.528115 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-08 00:35:19.528128 | orchestrator | Sunday 08 March 2026 00:35:16 +0000 (0:00:00.958) 0:00:11.431 **********
2026-03-08 00:35:19.528141 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.528153 | orchestrator |
2026-03-08 00:35:19.528166 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-08 00:35:19.528179 | orchestrator | Sunday 08 March 2026 00:35:18 +0000 (0:00:01.615) 0:00:13.047 **********
2026-03-08 00:35:19.528192 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:19.528204 | orchestrator |
2026-03-08 00:35:19.528216 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:35:19.528285 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:35:19.528301 | orchestrator |
2026-03-08 00:35:19.528315 | orchestrator |
2026-03-08 00:35:19.528327 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:35:19.528339 | orchestrator | Sunday 08 March 2026 00:35:19 +0000 (0:00:00.832) 0:00:13.879 **********
2026-03-08 00:35:19.528352 | orchestrator | ===============================================================================
2026-03-08 00:35:19.528365 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.63s
2026-03-08 00:35:19.528378 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.62s
2026-03-08 00:35:19.528391 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s
2026-03-08 00:35:19.528401 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2026-03-08 00:35:19.528412 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-03-08 00:35:19.528423 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.83s
2026-03-08 00:35:19.528433 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s
2026-03-08 00:35:19.528444 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s
2026-03-08 00:35:19.528454 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-03-08 00:35:19.528470 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2026-03-08 00:35:19.528481 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-03-08 00:35:19.810227 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-08 00:35:19.844724 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-08 00:35:19.844842 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-08 00:35:19.924883 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 186 0 --:--:-- --:--:-- --:--:-- 187
2026-03-08 00:35:19.935977 | orchestrator | + osism apply --environment custom workarounds
2026-03-08 00:35:21.937067 | orchestrator | 2026-03-08 00:35:21 | INFO  | Trying to run play workarounds in environment custom
2026-03-08 00:35:31.945466 | orchestrator | 2026-03-08 00:35:31 | INFO  | Prepare task for execution of workarounds.
2026-03-08 00:35:32.016449 | orchestrator | 2026-03-08 00:35:32 | INFO  | Task a749ea26-6bec-4e16-b546-0b90aa233330 (workarounds) was prepared for execution.
2026-03-08 00:35:32.016532 | orchestrator | 2026-03-08 00:35:32 | INFO  | It takes a moment until task a749ea26-6bec-4e16-b546-0b90aa233330 (workarounds) has been started and output is visible here.
2026-03-08 00:35:54.755359 | orchestrator |
2026-03-08 00:35:54.755497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:35:54.755526 | orchestrator |
2026-03-08 00:35:54.755547 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-08 00:35:54.755569 | orchestrator | Sunday 08 March 2026 00:35:35 +0000 (0:00:00.095) 0:00:00.095 **********
2026-03-08 00:35:54.755589 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755611 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755730 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755748 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755767 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755786 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755806 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-08 00:35:54.755862 | orchestrator |
2026-03-08 00:35:54.755883 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-08 00:35:54.755901 | orchestrator |
2026-03-08 00:35:54.755920 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-08 00:35:54.755940 | orchestrator | Sunday 08 March 2026 00:35:35 +0000 (0:00:00.564) 0:00:00.660 **********
2026-03-08 00:35:54.755960 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:54.755982 | orchestrator |
2026-03-08 00:35:54.756001 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-08 00:35:54.756021 | orchestrator |
2026-03-08 00:35:54.756039 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-08 00:35:54.756057 | orchestrator | Sunday 08 March 2026 00:35:37 +0000 (0:00:01.962) 0:00:02.622 **********
2026-03-08 00:35:54.756076 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:35:54.756095 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:35:54.756114 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:35:54.756131 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:35:54.756148 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:35:54.756165 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:35:54.756182 | orchestrator |
2026-03-08 00:35:54.756200 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-08 00:35:54.756218 | orchestrator |
2026-03-08 00:35:54.756231 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-08 00:35:54.756243 | orchestrator | Sunday 08 March 2026 00:35:39 +0000 (0:00:01.654) 0:00:04.276 **********
2026-03-08 00:35:54.756255 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756266 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756276 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756286 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756295 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756305 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-08 00:35:54.756314 | orchestrator |
2026-03-08 00:35:54.756324 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-08 00:35:54.756333 | orchestrator | Sunday 08 March 2026 00:35:40 +0000 (0:00:01.298) 0:00:05.575 **********
2026-03-08 00:35:54.756344 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:35:54.756353 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:35:54.756363 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:35:54.756372 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:35:54.756382 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:35:54.756391 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:35:54.756401 | orchestrator |
2026-03-08 00:35:54.756410 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-08 00:35:54.756420 | orchestrator | Sunday 08 March 2026 00:35:44 +0000 (0:00:03.665) 0:00:09.241 **********
2026-03-08 00:35:54.756429 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:54.756454 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:54.756464 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:54.756473 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:54.756482 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:54.756492 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:54.756501 | orchestrator |
2026-03-08 00:35:54.756511 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-08 00:35:54.756520 | orchestrator |
2026-03-08 00:35:54.756530 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-08 00:35:54.756539 | orchestrator | Sunday 08 March 2026 00:35:45 +0000 (0:00:00.652) 0:00:09.893 **********
2026-03-08 00:35:54.756558 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:35:54.756568 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:35:54.756578 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:35:54.756587 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:35:54.756596 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:35:54.756606 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:35:54.756643 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:54.756659 | orchestrator |
2026-03-08 00:35:54.756674 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-08 00:35:54.756684 | orchestrator | Sunday 08 March 2026 00:35:46 +0000 (0:00:01.512) 0:00:11.406 **********
2026-03-08 00:35:54.756693 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:35:54.756703 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:35:54.756712 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:35:54.756722 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:35:54.756732 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:35:54.756741 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:35:54.756773 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:54.756783 | orchestrator |
2026-03-08 00:35:54.756793 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-08 00:35:54.756803 | orchestrator | Sunday 08 March 2026 00:35:48 +0000 (0:00:01.548) 0:00:12.954 **********
2026-03-08 00:35:54.756818 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:35:54.756834 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:35:54.756850 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:35:54.756866 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:35:54.756882 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:35:54.756897 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:35:54.756913 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:54.756929 | orchestrator |
2026-03-08 00:35:54.756942 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-08 00:35:54.756956 | orchestrator | Sunday 08 March 2026 00:35:49 +0000 (0:00:01.522) 0:00:14.477 **********
2026-03-08 00:35:54.756971 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:35:54.756985 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:35:54.757001 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:35:54.757018 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:35:54.757035 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:35:54.757052 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:35:54.757068 | orchestrator | changed: [testbed-manager]
2026-03-08 00:35:54.757080 | orchestrator |
2026-03-08 00:35:54.757090 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-08 00:35:54.757099 | orchestrator | Sunday 08 March 2026 00:35:51 +0000 (0:00:01.815) 0:00:16.293 **********
2026-03-08 00:35:54.757109 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:35:54.757118 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:35:54.757128 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:35:54.757137 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:35:54.757146 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:35:54.757156 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:35:54.757165 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:35:54.757182 | orchestrator |
2026-03-08 00:35:54.757198 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-08 00:35:54.757214 | orchestrator |
2026-03-08 00:35:54.757230 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-08 00:35:54.757247 | orchestrator | Sunday 08 March 2026 00:35:51 +0000 (0:00:00.569) 0:00:16.862 **********
2026-03-08 00:35:54.757263 | orchestrator | ok: [testbed-manager]
2026-03-08 00:35:54.757278 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:35:54.757296 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:35:54.757311 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:35:54.757329 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:35:54.757345 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:35:54.757372 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:35:54.757383 | orchestrator |
2026-03-08 00:35:54.757392 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:35:54.757403 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-08 00:35:54.757415 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757431 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757448 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757464 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757480 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757496 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:35:54.757512 | orchestrator |
2026-03-08 00:35:54.757530 | orchestrator |
2026-03-08 00:35:54.757555 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:35:54.757572 | orchestrator | Sunday 08 March 2026 00:35:54 +0000 (0:00:02.756) 0:00:19.618 **********
2026-03-08 00:35:54.757589 | orchestrator | ===============================================================================
2026-03-08 00:35:54.757605 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.67s
2026-03-08 00:35:54.757645 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2026-03-08 00:35:54.757656 | orchestrator | Apply netplan configuration --------------------------------------------- 1.96s
2026-03-08 00:35:54.757665 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.82s
2026-03-08 00:35:54.757675 | orchestrator | Apply netplan configuration --------------------------------------------- 1.65s
2026-03-08 00:35:54.757684 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.55s
2026-03-08 00:35:54.757694 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-03-08 00:35:54.757703 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.51s
2026-03-08 00:35:54.757713 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s
2026-03-08 00:35:54.757725 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2026-03-08 00:35:54.757742 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.57s
2026-03-08 00:35:54.757772 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.56s
2026-03-08 00:35:55.129851 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-08 00:36:06.987539 | orchestrator | 2026-03-08 00:36:06 | INFO  | Prepare task for execution of reboot.
2026-03-08 00:36:07.070001 | orchestrator | 2026-03-08 00:36:07 | INFO  | Task 9f2d69c6-5f6a-4c20-a32a-b177bc4c5d8f (reboot) was prepared for execution.
2026-03-08 00:36:07.070186 | orchestrator | 2026-03-08 00:36:07 | INFO  | It takes a moment until task 9f2d69c6-5f6a-4c20-a32a-b177bc4c5d8f (reboot) has been started and output is visible here.
2026-03-08 00:36:17.517066 | orchestrator | 2026-03-08 00:36:17.517161 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.517178 | orchestrator | 2026-03-08 00:36:17.517191 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.517225 | orchestrator | Sunday 08 March 2026 00:36:11 +0000 (0:00:00.209) 0:00:00.209 ********** 2026-03-08 00:36:17.517238 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:36:17.517249 | orchestrator | 2026-03-08 00:36:17.517260 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.517271 | orchestrator | Sunday 08 March 2026 00:36:11 +0000 (0:00:00.110) 0:00:00.319 ********** 2026-03-08 00:36:17.517282 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:36:17.517293 | orchestrator | 2026-03-08 00:36:17.517304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-08 00:36:17.517314 | orchestrator | Sunday 08 March 2026 00:36:12 +0000 (0:00:01.051) 0:00:01.371 ********** 2026-03-08 00:36:17.517325 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:36:17.517335 | orchestrator | 2026-03-08 00:36:17.517346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.517357 | orchestrator | 2026-03-08 00:36:17.517368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.517378 | orchestrator | Sunday 08 March 2026 00:36:12 +0000 (0:00:00.131) 0:00:01.502 ********** 2026-03-08 00:36:17.517389 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:36:17.517399 | orchestrator | 2026-03-08 00:36:17.517410 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.517421 | orchestrator | Sunday 08 March 2026 
00:36:12 +0000 (0:00:00.106) 0:00:01.608 ********** 2026-03-08 00:36:17.517431 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:36:17.517442 | orchestrator | 2026-03-08 00:36:17.517453 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-08 00:36:17.517464 | orchestrator | Sunday 08 March 2026 00:36:13 +0000 (0:00:00.662) 0:00:02.270 ********** 2026-03-08 00:36:17.517475 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:36:17.517486 | orchestrator | 2026-03-08 00:36:17.517497 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.517507 | orchestrator | 2026-03-08 00:36:17.517518 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.517529 | orchestrator | Sunday 08 March 2026 00:36:13 +0000 (0:00:00.115) 0:00:02.386 ********** 2026-03-08 00:36:17.517539 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:36:17.517550 | orchestrator | 2026-03-08 00:36:17.517560 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.517571 | orchestrator | Sunday 08 March 2026 00:36:13 +0000 (0:00:00.210) 0:00:02.597 ********** 2026-03-08 00:36:17.517644 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:36:17.517665 | orchestrator | 2026-03-08 00:36:17.517682 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-08 00:36:17.517700 | orchestrator | Sunday 08 March 2026 00:36:14 +0000 (0:00:00.703) 0:00:03.301 ********** 2026-03-08 00:36:17.517719 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:36:17.517739 | orchestrator | 2026-03-08 00:36:17.517760 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.517778 | orchestrator | 2026-03-08 00:36:17.517796 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.517810 | orchestrator | Sunday 08 March 2026 00:36:14 +0000 (0:00:00.136) 0:00:03.437 ********** 2026-03-08 00:36:17.517822 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:36:17.517834 | orchestrator | 2026-03-08 00:36:17.517846 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.517872 | orchestrator | Sunday 08 March 2026 00:36:14 +0000 (0:00:00.097) 0:00:03.535 ********** 2026-03-08 00:36:17.517883 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:36:17.517894 | orchestrator | 2026-03-08 00:36:17.517905 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-08 00:36:17.517916 | orchestrator | Sunday 08 March 2026 00:36:15 +0000 (0:00:00.695) 0:00:04.230 ********** 2026-03-08 00:36:17.517927 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:36:17.517947 | orchestrator | 2026-03-08 00:36:17.517958 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.517969 | orchestrator | 2026-03-08 00:36:17.517980 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.517991 | orchestrator | Sunday 08 March 2026 00:36:15 +0000 (0:00:00.112) 0:00:04.343 ********** 2026-03-08 00:36:17.518002 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:36:17.518012 | orchestrator | 2026-03-08 00:36:17.518082 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.518094 | orchestrator | Sunday 08 March 2026 00:36:15 +0000 (0:00:00.095) 0:00:04.438 ********** 2026-03-08 00:36:17.518104 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:36:17.518115 | orchestrator | 2026-03-08 00:36:17.518126 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-08 00:36:17.518136 | orchestrator | Sunday 08 March 2026 00:36:16 +0000 (0:00:00.640) 0:00:05.079 ********** 2026-03-08 00:36:17.518147 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:36:17.518157 | orchestrator | 2026-03-08 00:36:17.518168 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-08 00:36:17.518179 | orchestrator | 2026-03-08 00:36:17.518190 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-08 00:36:17.518200 | orchestrator | Sunday 08 March 2026 00:36:16 +0000 (0:00:00.124) 0:00:05.204 ********** 2026-03-08 00:36:17.518211 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:36:17.518222 | orchestrator | 2026-03-08 00:36:17.518232 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-08 00:36:17.518243 | orchestrator | Sunday 08 March 2026 00:36:16 +0000 (0:00:00.124) 0:00:05.329 ********** 2026-03-08 00:36:17.518254 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:36:17.518264 | orchestrator | 2026-03-08 00:36:17.518275 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-08 00:36:17.518286 | orchestrator | Sunday 08 March 2026 00:36:17 +0000 (0:00:00.687) 0:00:06.016 ********** 2026-03-08 00:36:17.518314 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:36:17.518326 | orchestrator | 2026-03-08 00:36:17.518337 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:36:17.518349 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:17.518361 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:17.518371 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-08 00:36:17.518382 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:17.518394 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:17.518415 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-08 00:36:17.518433 | orchestrator | 2026-03-08 00:36:17.518451 | orchestrator | 2026-03-08 00:36:17.518471 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:36:17.518489 | orchestrator | Sunday 08 March 2026 00:36:17 +0000 (0:00:00.036) 0:00:06.053 ********** 2026-03-08 00:36:17.518506 | orchestrator | =============================================================================== 2026-03-08 00:36:17.518525 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.44s 2026-03-08 00:36:17.518543 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s 2026-03-08 00:36:17.518633 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2026-03-08 00:36:17.854392 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-08 00:36:29.845456 | orchestrator | 2026-03-08 00:36:29 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-08 00:36:29.919876 | orchestrator | 2026-03-08 00:36:29 | INFO  | Task 36cbc133-8b2d-4b19-b27d-60ccf8c2b6ea (wait-for-connection) was prepared for execution. 2026-03-08 00:36:29.919989 | orchestrator | 2026-03-08 00:36:29 | INFO  | It takes a moment until task 36cbc133-8b2d-4b19-b27d-60ccf8c2b6ea (wait-for-connection) has been started and output is visible here. 
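The sequence above shows the standard pattern for rebooting a fleet: trigger the reboot without waiting for it to complete, then run a separate wait-for-connection step that polls until each node answers again. A minimal shell sketch of that pattern, assuming plain `ssh` access and an illustrative 5-minute budget (the host name, timeout, and helper name are not from the log):

```shell
# Hedged sketch of the reboot-then-poll pattern seen in the log: fire the
# reboot and return immediately, then poll until the host is reachable again.
reboot_and_wait() {
    local host=$1 timeout=${2:-300} waited=0
    # Trigger the reboot; the ssh session is expected to drop, so ignore the
    # exit status.
    ssh "$host" 'sudo shutdown -r now' || true
    sleep 10   # give the host time to actually go down before polling
    until ssh -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "$host did not come back within ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}
```

Splitting "reboot" and "wait" into separate plays, as the log does, keeps the reboot play fast across many hosts and lets the wait step run against all nodes in parallel.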
2026-03-08 00:36:45.556328 | orchestrator | 2026-03-08 00:36:45.556437 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-08 00:36:45.556455 | orchestrator | 2026-03-08 00:36:45.556467 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-08 00:36:45.556479 | orchestrator | Sunday 08 March 2026 00:36:33 +0000 (0:00:00.165) 0:00:00.165 ********** 2026-03-08 00:36:45.556490 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:36:45.556502 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:36:45.556514 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:36:45.556525 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:36:45.556584 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:36:45.556613 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:36:45.556625 | orchestrator | 2026-03-08 00:36:45.556636 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:36:45.556648 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556661 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556672 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556683 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556695 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556706 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:36:45.556717 | orchestrator | 2026-03-08 00:36:45.556728 | orchestrator | 2026-03-08 00:36:45.556739 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-08 00:36:45.556750 | orchestrator | Sunday 08 March 2026 00:36:45 +0000 (0:00:11.428) 0:00:11.594 ********** 2026-03-08 00:36:45.556761 | orchestrator | =============================================================================== 2026-03-08 00:36:45.556771 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.43s 2026-03-08 00:36:45.874473 | orchestrator | + osism apply hddtemp 2026-03-08 00:36:57.864113 | orchestrator | 2026-03-08 00:36:57 | INFO  | Prepare task for execution of hddtemp. 2026-03-08 00:36:57.934640 | orchestrator | 2026-03-08 00:36:57 | INFO  | Task 69c9c312-df41-409b-94e1-d0726340b08f (hddtemp) was prepared for execution. 2026-03-08 00:36:57.934757 | orchestrator | 2026-03-08 00:36:57 | INFO  | It takes a moment until task 69c9c312-df41-409b-94e1-d0726340b08f (hddtemp) has been started and output is visible here. 2026-03-08 00:37:25.483962 | orchestrator | 2026-03-08 00:37:25.484075 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-08 00:37:25.484092 | orchestrator | 2026-03-08 00:37:25.484104 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-08 00:37:25.484116 | orchestrator | Sunday 08 March 2026 00:37:02 +0000 (0:00:00.271) 0:00:00.271 ********** 2026-03-08 00:37:25.484154 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:25.484167 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:25.484177 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:25.484188 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:25.484198 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:25.484210 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:25.484221 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:25.484231 | orchestrator | 2026-03-08 00:37:25.484242 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-08 00:37:25.484253 | orchestrator | Sunday 08 March 2026 00:37:03 +0000 (0:00:00.730) 0:00:01.001 ********** 2026-03-08 00:37:25.484266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:37:25.484279 | orchestrator | 2026-03-08 00:37:25.484290 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-08 00:37:25.484301 | orchestrator | Sunday 08 March 2026 00:37:04 +0000 (0:00:01.138) 0:00:02.140 ********** 2026-03-08 00:37:25.484312 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:25.484322 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:25.484333 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:25.484343 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:25.484354 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:25.484364 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:25.484375 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:25.484385 | orchestrator | 2026-03-08 00:37:25.484396 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-08 00:37:25.484407 | orchestrator | Sunday 08 March 2026 00:37:06 +0000 (0:00:02.127) 0:00:04.267 ********** 2026-03-08 00:37:25.484417 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:25.484429 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:25.484440 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:37:25.484450 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:25.484460 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:25.484497 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:25.484511 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:37:25.484524 | 
orchestrator | 2026-03-08 00:37:25.484536 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-08 00:37:25.484549 | orchestrator | Sunday 08 March 2026 00:37:07 +0000 (0:00:01.152) 0:00:05.420 ********** 2026-03-08 00:37:25.484562 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:37:25.484574 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:37:25.484586 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:37:25.484598 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:37:25.484610 | orchestrator | ok: [testbed-manager] 2026-03-08 00:37:25.484622 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:37:25.484634 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:37:25.484646 | orchestrator | 2026-03-08 00:37:25.484659 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-08 00:37:25.484672 | orchestrator | Sunday 08 March 2026 00:37:08 +0000 (0:00:01.128) 0:00:06.548 ********** 2026-03-08 00:37:25.484684 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:37:25.484697 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:37:25.484709 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:25.484721 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:37:25.484747 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:37:25.484760 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:37:25.484773 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:37:25.484785 | orchestrator | 2026-03-08 00:37:25.484798 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-08 00:37:25.484811 | orchestrator | Sunday 08 March 2026 00:37:09 +0000 (0:00:00.813) 0:00:07.361 ********** 2026-03-08 00:37:25.484823 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:25.484836 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:25.484856 | orchestrator | changed: [testbed-node-1] 
2026-03-08 00:37:25.484867 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:25.484878 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:25.484888 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:25.484899 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:37:25.484909 | orchestrator | 2026-03-08 00:37:25.484920 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-08 00:37:25.484931 | orchestrator | Sunday 08 March 2026 00:37:22 +0000 (0:00:13.080) 0:00:20.442 ********** 2026-03-08 00:37:25.484942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:37:25.484953 | orchestrator | 2026-03-08 00:37:25.484963 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-08 00:37:25.484974 | orchestrator | Sunday 08 March 2026 00:37:23 +0000 (0:00:00.926) 0:00:21.369 ********** 2026-03-08 00:37:25.484984 | orchestrator | changed: [testbed-manager] 2026-03-08 00:37:25.484995 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:37:25.485006 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:37:25.485016 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:37:25.485027 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:37:25.485037 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:37:25.485048 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:37:25.485058 | orchestrator | 2026-03-08 00:37:25.485069 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:37:25.485080 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:37:25.485112 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485124 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485134 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485145 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485156 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485167 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:37:25.485177 | orchestrator | 2026-03-08 00:37:25.485188 | orchestrator | 2026-03-08 00:37:25.485199 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:37:25.485210 | orchestrator | Sunday 08 March 2026 00:37:25 +0000 (0:00:01.747) 0:00:23.116 ********** 2026-03-08 00:37:25.485220 | orchestrator | =============================================================================== 2026-03-08 00:37:25.485231 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.08s 2026-03-08 00:37:25.485242 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.13s 2026-03-08 00:37:25.485252 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-03-08 00:37:25.485263 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.15s 2026-03-08 00:37:25.485274 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s 2026-03-08 00:37:25.485284 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.13s 2026-03-08 00:37:25.485301 | orchestrator | osism.services.hddtemp : Include 
distribution specific service tasks ---- 0.93s 2026-03-08 00:37:25.485312 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.81s 2026-03-08 00:37:25.485323 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-03-08 00:37:25.675523 | orchestrator | ++ semver latest 7.1.1 2026-03-08 00:37:25.730610 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-08 00:37:25.730703 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-08 00:37:25.730719 | orchestrator | + sudo systemctl restart manager.service 2026-03-08 00:38:05.591606 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-08 00:38:05.591716 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-08 00:38:05.591732 | orchestrator | + local max_attempts=60 2026-03-08 00:38:05.591745 | orchestrator | + local name=ceph-ansible 2026-03-08 00:38:05.591756 | orchestrator | + local attempt_num=1 2026-03-08 00:38:05.591767 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:05.624140 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:05.624230 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:05.624244 | orchestrator | + sleep 5 2026-03-08 00:38:10.630267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:10.781165 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:10.781287 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:10.781304 | orchestrator | + sleep 5 2026-03-08 00:38:15.767584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:15.793525 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:15.793616 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:15.793631 | orchestrator | + sleep 5 2026-03-08 00:38:20.796668 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:20.829252 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:20.829343 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:20.829356 | orchestrator | + sleep 5 2026-03-08 00:38:25.833383 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:25.864589 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:25.864686 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:25.864701 | orchestrator | + sleep 5 2026-03-08 00:38:30.869709 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:30.908896 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:30.908982 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:30.908992 | orchestrator | + sleep 5 2026-03-08 00:38:35.912897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:35.948902 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:35.949016 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:35.949040 | orchestrator | + sleep 5 2026-03-08 00:38:40.953069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:40.982987 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:40.983079 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:40.983093 | orchestrator | + sleep 5 2026-03-08 00:38:45.985262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:46.026081 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:46.026187 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:46.026203 | orchestrator | + sleep 5 2026-03-08 00:38:51.028842 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:51.066126 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:51.066243 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:51.066270 | orchestrator | + sleep 5 2026-03-08 00:38:56.070268 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:38:56.113540 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:38:56.113632 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:38:56.113647 | orchestrator | + sleep 5 2026-03-08 00:39:01.117524 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:01.153896 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:01.153996 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:39:01.154215 | orchestrator | + sleep 5 2026-03-08 00:39:06.158511 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:06.204817 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:06.205113 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-08 00:39:06.205141 | orchestrator | + sleep 5 2026-03-08 00:39:11.209071 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-08 00:39:11.242588 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:11.242823 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-08 00:39:11.242860 | orchestrator | + local max_attempts=60 2026-03-08 00:39:11.242880 | orchestrator | + local name=kolla-ansible 2026-03-08 00:39:11.242945 | orchestrator | + local attempt_num=1 2026-03-08 00:39:11.242970 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-08 00:39:11.280418 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:11.281617 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-08 00:39:11.281671 | orchestrator | + local max_attempts=60 2026-03-08 00:39:11.281683 | orchestrator | + local name=osism-ansible 2026-03-08 00:39:11.281694 | orchestrator | + local attempt_num=1 2026-03-08 00:39:11.282732 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-08 00:39:11.316526 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-08 00:39:11.316622 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-08 00:39:11.316646 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-08 00:39:11.457206 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-08 00:39:11.740025 | orchestrator | ARA in osism-ansible already disabled. 2026-03-08 00:39:11.894506 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-08 00:39:11.895310 | orchestrator | + osism apply gather-facts 2026-03-08 00:39:23.994949 | orchestrator | 2026-03-08 00:39:23 | INFO  | Prepare task for execution of gather-facts. 2026-03-08 00:39:24.064123 | orchestrator | 2026-03-08 00:39:24 | INFO  | Task fd91e581-6b55-4636-aae9-140dcc9738df (gather-facts) was prepared for execution. 2026-03-08 00:39:24.064960 | orchestrator | 2026-03-08 00:39:24 | INFO  | It takes a moment until task fd91e581-6b55-4636-aae9-140dcc9738df (gather-facts) has been started and output is visible here. 
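The `wait_for_container_healthy` calls traced above poll `docker inspect` for the container's health status, sleeping 5 seconds between attempts. A hedged reconstruction of the helper as implied by the trace (the error message is an assumption; the polling logic follows the `max_attempts`/`attempt_num`/`sleep 5` steps visible in the log):

```shell
# Reconstruction of the health-wait helper implied by the trace: poll the
# Docker health status until it reports "healthy", giving up after
# max_attempts polls with 5 seconds between them.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

As the log shows, the status passes through `unhealthy` and `starting` before reaching `healthy` after the manager service restart, which is why a simple one-shot check would not suffice here.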
2026-03-08 00:39:36.884051 | orchestrator | 2026-03-08 00:39:36.884145 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-08 00:39:36.884160 | orchestrator | 2026-03-08 00:39:36.884172 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-08 00:39:36.884182 | orchestrator | Sunday 08 March 2026 00:39:27 +0000 (0:00:00.166) 0:00:00.166 ********** 2026-03-08 00:39:36.884193 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:39:36.884205 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:39:36.884216 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:39:36.884226 | orchestrator | ok: [testbed-manager] 2026-03-08 00:39:36.884237 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:39:36.884247 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:39:36.884257 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:39:36.884268 | orchestrator | 2026-03-08 00:39:36.884279 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-08 00:39:36.884290 | orchestrator | 2026-03-08 00:39:36.884300 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-08 00:39:36.884311 | orchestrator | Sunday 08 March 2026 00:39:36 +0000 (0:00:08.429) 0:00:08.595 ********** 2026-03-08 00:39:36.884322 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:39:36.884333 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:39:36.884344 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:39:36.884395 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:39:36.884406 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:39:36.884417 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:39:36.884428 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:39:36.884438 | orchestrator | 2026-03-08 00:39:36.884449 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-08 00:39:36.884460 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884498 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884510 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884537 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884549 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884559 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884570 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-08 00:39:36.884600 | orchestrator | 2026-03-08 00:39:36.884611 | orchestrator | 2026-03-08 00:39:36.884622 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:39:36.884633 | orchestrator | Sunday 08 March 2026 00:39:36 +0000 (0:00:00.448) 0:00:09.043 ********** 2026-03-08 00:39:36.884644 | orchestrator | =============================================================================== 2026-03-08 00:39:36.884654 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.43s 2026-03-08 00:39:36.884665 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-08 00:39:37.196918 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-08 00:39:37.208791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-08 
00:39:37.226090 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-08 00:39:37.245456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-08 00:39:37.254436 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-08 00:39:37.267919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-08 00:39:37.279943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-08 00:39:37.291216 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-08 00:39:37.304612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-08 00:39:37.324504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-08 00:39:37.341073 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-08 00:39:37.361538 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-08 00:39:37.380730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-08 00:39:37.399467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-08 00:39:37.417678 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-08 00:39:37.435691 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-08 00:39:37.450826 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-08 00:39:37.465893 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-08 00:39:37.482371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-08 00:39:37.503214 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-08 00:39:37.520886 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-08 00:39:37.541574 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-08 00:39:37.555550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-08 00:39:37.577419 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-08 00:39:38.060130 | orchestrator | ok: Runtime: 0:25:08.965494 2026-03-08 00:39:38.167345 | 2026-03-08 00:39:38.167550 | TASK [Deploy services] 2026-03-08 00:39:38.702386 | orchestrator | skipping: Conditional result was False 2026-03-08 00:39:38.711788 | 2026-03-08 00:39:38.711921 | TASK [Deploy in a nutshell] 2026-03-08 00:39:39.374566 | orchestrator | + set -e 2026-03-08 00:39:39.374695 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-08 00:39:39.374711 | orchestrator | ++ export INTERACTIVE=false 2026-03-08 00:39:39.374724 | orchestrator | ++ INTERACTIVE=false 2026-03-08 00:39:39.374732 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-08 00:39:39.374740 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-08 00:39:39.374749 | 
orchestrator | + source /opt/manager-vars.sh
2026-03-08 00:39:39.374778 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-08 00:39:39.374796 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-08 00:39:39.374804 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-08 00:39:39.374814 | orchestrator | ++ CEPH_VERSION=reef
2026-03-08 00:39:39.374822 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-08 00:39:39.374833 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-08 00:39:39.374840 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-08 00:39:39.374854 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-08 00:39:39.374860 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-08 00:39:39.374869 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-08 00:39:39.374876 | orchestrator | ++ export ARA=false
2026-03-08 00:39:39.374883 | orchestrator | ++ ARA=false
2026-03-08 00:39:39.374889 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-08 00:39:39.374897 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-08 00:39:39.374903 | orchestrator | ++ export TEMPEST=true
2026-03-08 00:39:39.374910 | orchestrator | ++ TEMPEST=true
2026-03-08 00:39:39.374916 | orchestrator | ++ export IS_ZUUL=true
2026-03-08 00:39:39.374923 | orchestrator | ++ IS_ZUUL=true
2026-03-08 00:39:39.374930 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206
2026-03-08 00:39:39.374937 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.206
2026-03-08 00:39:39.374943 | orchestrator | ++ export EXTERNAL_API=false
2026-03-08 00:39:39.375155 | orchestrator |
2026-03-08 00:39:39.375171 | orchestrator | # PULL IMAGES
2026-03-08 00:39:39.375179 | orchestrator |
2026-03-08 00:39:39.375201 | orchestrator | ++ EXTERNAL_API=false
2026-03-08 00:39:39.375209 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-08 00:39:39.375218 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-08 00:39:39.375226 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-08 00:39:39.375233 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-08 00:39:39.375241 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-08 00:39:39.375255 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-08 00:39:39.375264 | orchestrator | + echo
2026-03-08 00:39:39.375271 | orchestrator | + echo '# PULL IMAGES'
2026-03-08 00:39:39.375279 | orchestrator | + echo
2026-03-08 00:39:39.375649 | orchestrator | ++ semver latest 7.0.0
2026-03-08 00:39:39.423915 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-08 00:39:39.423982 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-08 00:39:39.423993 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-08 00:39:41.202175 | orchestrator | 2026-03-08 00:39:41 | INFO  | Trying to run play pull-images in environment custom
2026-03-08 00:39:51.251646 | orchestrator | 2026-03-08 00:39:51 | INFO  | Prepare task for execution of pull-images.
2026-03-08 00:39:51.321729 | orchestrator | 2026-03-08 00:39:51 | INFO  | Task 92471685-28e0-4a08-9513-5e432dd33d00 (pull-images) was prepared for execution.
2026-03-08 00:39:51.321854 | orchestrator | 2026-03-08 00:39:51 | INFO  | Task 92471685-28e0-4a08-9513-5e432dd33d00 is running in background. No more output. Check ARA for logs.
2026-03-08 00:39:53.424138 | orchestrator | 2026-03-08 00:39:53 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-08 00:40:03.485928 | orchestrator | 2026-03-08 00:40:03 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-08 00:40:03.554835 | orchestrator | 2026-03-08 00:40:03 | INFO  | Task 2a4f682b-7a49-4cea-a06a-f271a84ce15c (wipe-partitions) was prepared for execution.
2026-03-08 00:40:03.554919 | orchestrator | 2026-03-08 00:40:03 | INFO  | It takes a moment until task 2a4f682b-7a49-4cea-a06a-f271a84ce15c (wipe-partitions) has been started and output is visible here.
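The trace above gates the run on MANAGER_VERSION: `semver latest 7.0.0` returns -1, so the numeric comparison `[[ -1 -ge 0 ]]` fails, and an explicit check for the literal tag `latest` still sends the run down the `-e custom` path. A minimal sketch of that gate logic, assuming (not verified from this log) that the `semver` helper returns -1/0/1 and treats non-numeric tags as sorting lowest:

```python
# Sketch of the version gate visible in the trace: MANAGER_VERSION is compared
# against 7.0.0, and the tag "latest" is special-cased. The real `semver`
# helper's exact semantics are an assumption here.

def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted versions; non-numeric tags sort lowest (-1)."""
    def parse(v):
        try:
            return tuple(int(p) for p in v.split("."))
        except ValueError:
            return None
    pa, pb = parse(a), parse(b)
    if pa is None or pb is None:
        return -1  # e.g. "latest" is not a semver; mirrors the -1 in the log
    return (pa > pb) - (pa < pb)

def runs_in_custom_env(manager_version: str) -> bool:
    """True when the traced script would pass `-e custom` to osism apply."""
    return semver_cmp(manager_version, "7.0.0") >= 0 or manager_version == "latest"

print(runs_in_custom_env("latest"))  # the job's MANAGER_VERSION=latest
```

With `MANAGER_VERSION=latest` the numeric branch loses but the string check wins, which matches the `osism apply --no-wait -r 2 -e custom pull-images` call that follows.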
2026-03-08 00:40:15.207803 | orchestrator |
2026-03-08 00:40:15.207949 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-08 00:40:15.207967 | orchestrator |
2026-03-08 00:40:15.207979 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-08 00:40:15.208000 | orchestrator | Sunday 08 March 2026 00:40:07 +0000 (0:00:00.095) 0:00:00.095 **********
2026-03-08 00:40:15.208047 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:40:15.208062 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:40:15.208073 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:40:15.208084 | orchestrator |
2026-03-08 00:40:15.208094 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-08 00:40:15.208105 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.483) 0:00:00.579 **********
2026-03-08 00:40:15.208122 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:15.208134 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:40:15.208145 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:40:15.208156 | orchestrator |
2026-03-08 00:40:15.208166 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-08 00:40:15.208178 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.258) 0:00:00.838 **********
2026-03-08 00:40:15.208189 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:40:15.208201 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:40:15.208211 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:40:15.208222 | orchestrator |
2026-03-08 00:40:15.208233 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-08 00:40:15.208244 | orchestrator | Sunday 08 March 2026 00:40:08 +0000 (0:00:00.502) 0:00:01.340 **********
2026-03-08 00:40:15.208255 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:15.208266 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:40:15.208276 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:40:15.208287 | orchestrator |
2026-03-08 00:40:15.208298 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-08 00:40:15.208309 | orchestrator | Sunday 08 March 2026 00:40:09 +0000 (0:00:00.223) 0:00:01.564 **********
2026-03-08 00:40:15.208623 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-08 00:40:15.208654 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-08 00:40:15.208675 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-08 00:40:15.208691 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-08 00:40:15.208703 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-08 00:40:15.208714 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-08 00:40:15.208726 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-08 00:40:15.208737 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-08 00:40:15.208750 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-08 00:40:15.208762 | orchestrator |
2026-03-08 00:40:15.208774 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-08 00:40:15.208786 | orchestrator | Sunday 08 March 2026 00:40:10 +0000 (0:00:01.052) 0:00:02.616 **********
2026-03-08 00:40:15.208798 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-08 00:40:15.208809 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-08 00:40:15.208821 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-08 00:40:15.208833 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-08 00:40:15.208846 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-08 00:40:15.208859 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-08 00:40:15.208870 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-08 00:40:15.208884 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-08 00:40:15.208892 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-08 00:40:15.208900 | orchestrator |
2026-03-08 00:40:15.208922 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-08 00:40:15.208930 | orchestrator | Sunday 08 March 2026 00:40:11 +0000 (0:00:01.452) 0:00:04.069 **********
2026-03-08 00:40:15.208938 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-08 00:40:15.208945 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-08 00:40:15.208953 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-08 00:40:15.208960 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-08 00:40:15.208987 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-08 00:40:15.208995 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-08 00:40:15.209002 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-08 00:40:15.209009 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-08 00:40:15.209016 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-08 00:40:15.209023 | orchestrator |
2026-03-08 00:40:15.209030 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-08 00:40:15.209037 | orchestrator | Sunday 08 March 2026 00:40:13 +0000 (0:00:02.167) 0:00:06.237 **********
2026-03-08 00:40:15.209045 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:40:15.209053 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:40:15.209060 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:40:15.209067 | orchestrator |
2026-03-08 00:40:15.209074 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-08 00:40:15.209082 | orchestrator | Sunday 08 March 2026 00:40:14 +0000 (0:00:00.592) 0:00:06.830 **********
2026-03-08 00:40:15.209089 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:40:15.209096 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:40:15.209103 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:40:15.209113 | orchestrator |
2026-03-08 00:40:15.209121 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:40:15.209129 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:15.209138 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:15.209174 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:15.209181 | orchestrator |
2026-03-08 00:40:15.209189 | orchestrator |
2026-03-08 00:40:15.209196 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:40:15.209203 | orchestrator | Sunday 08 March 2026 00:40:14 +0000 (0:00:00.635) 0:00:07.466 **********
2026-03-08 00:40:15.209210 | orchestrator | ===============================================================================
2026-03-08 00:40:15.209218 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s
2026-03-08 00:40:15.209225 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.45s
2026-03-08 00:40:15.209232 | orchestrator | Check device availability ----------------------------------------------- 1.05s
2026-03-08 00:40:15.209239 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s
2026-03-08 00:40:15.209247 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2026-03-08 00:40:15.209254 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.50s
2026-03-08 00:40:15.209261 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.48s
2026-03-08 00:40:15.209268 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2026-03-08 00:40:15.209276 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-03-08 00:40:27.318385 | orchestrator | 2026-03-08 00:40:27 | INFO  | Prepare task for execution of facts.
2026-03-08 00:40:27.397842 | orchestrator | 2026-03-08 00:40:27 | INFO  | Task 956e47b3-42a3-4927-b904-053249b3555d (facts) was prepared for execution.
2026-03-08 00:40:27.397965 | orchestrator | 2026-03-08 00:40:27 | INFO  | It takes a moment until task 956e47b3-42a3-4927-b904-053249b3555d (facts) has been started and output is visible here.
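The wipe-partitions play above runs a fixed per-device sequence on the OSD disks (sdb/sdc/sdd of nodes 3-5), then refreshes udev once. A minimal sketch of the equivalent commands, assuming the tasks wrap plain `wipefs`, `dd`, and `udevadm` invocations; the play's actual modules and arguments are not visible in this log:

```python
# Sketch of the wipe sequence the play's task names describe. Built as a
# command list (dry run) rather than executed, since it targets raw disks.

def wipe_commands(devices):
    """Return the per-device and global commands matching the play's tasks."""
    cmds = []
    for dev in devices:
        # "Wipe partitions with wipefs": drop all filesystem/partition signatures
        cmds.append(["wipefs", "--all", dev])
        # "Overwrite first 32M with zeros": clear leftover metadata at disk start
        cmds.append(["dd", "if=/dev/zero", f"of={dev}", "bs=1M", "count=32"])
    # "Reload udev rules" and "Request device events from the kernel"
    cmds.append(["udevadm", "control", "--reload-rules"])
    cmds.append(["udevadm", "trigger"])
    return cmds

for cmd in wipe_commands(["/dev/sdb", "/dev/sdc", "/dev/sdd"]):
    print(" ".join(cmd))
```

The final `udevadm trigger` is what makes the kernel re-probe the now-blank disks, so the subsequent Ceph LVM configuration sees them without stale signatures.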
2026-03-08 00:40:39.224713 | orchestrator |
2026-03-08 00:40:39.224823 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-08 00:40:39.224840 | orchestrator |
2026-03-08 00:40:39.224881 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-08 00:40:39.224893 | orchestrator | Sunday 08 March 2026 00:40:31 +0000 (0:00:00.195) 0:00:00.195 **********
2026-03-08 00:40:39.224904 | orchestrator | ok: [testbed-manager]
2026-03-08 00:40:39.224916 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:40:39.224927 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:40:39.224937 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:40:39.224948 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:40:39.224959 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:40:39.224969 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:40:39.224980 | orchestrator |
2026-03-08 00:40:39.224991 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-08 00:40:39.225001 | orchestrator | Sunday 08 March 2026 00:40:32 +0000 (0:00:00.909) 0:00:01.104 **********
2026-03-08 00:40:39.225012 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:40:39.225024 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:40:39.225034 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:40:39.225045 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:40:39.225055 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:39.225066 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:40:39.225076 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:40:39.225087 | orchestrator |
2026-03-08 00:40:39.225098 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-08 00:40:39.225128 | orchestrator |
2026-03-08 00:40:39.225140 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-08 00:40:39.225151 | orchestrator | Sunday 08 March 2026 00:40:33 +0000 (0:00:01.077) 0:00:02.182 **********
2026-03-08 00:40:39.225162 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:40:39.225173 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:40:39.225184 | orchestrator | ok: [testbed-manager]
2026-03-08 00:40:39.225194 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:40:39.225205 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:40:39.225216 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:40:39.225226 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:40:39.225239 | orchestrator |
2026-03-08 00:40:39.225252 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-08 00:40:39.225266 | orchestrator |
2026-03-08 00:40:39.225278 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-08 00:40:39.225291 | orchestrator | Sunday 08 March 2026 00:40:38 +0000 (0:00:04.900) 0:00:07.082 **********
2026-03-08 00:40:39.225352 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:40:39.225365 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:40:39.225378 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:40:39.225390 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:40:39.225403 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:39.225415 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:40:39.225427 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:40:39.225439 | orchestrator |
2026-03-08 00:40:39.225452 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:40:39.225466 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225480 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225493 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225506 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225553 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225626 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225645 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:40:39.225664 | orchestrator |
2026-03-08 00:40:39.225682 | orchestrator |
2026-03-08 00:40:39.225698 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:40:39.225710 | orchestrator | Sunday 08 March 2026 00:40:38 +0000 (0:00:00.559) 0:00:07.642 **********
2026-03-08 00:40:39.225720 | orchestrator | ===============================================================================
2026-03-08 00:40:39.225731 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s
2026-03-08 00:40:39.225742 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2026-03-08 00:40:39.225753 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.91s
2026-03-08 00:40:39.225764 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-03-08 00:40:41.558288 | orchestrator | 2026-03-08 00:40:41 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-08 00:40:41.628557 | orchestrator | 2026-03-08 00:40:41 | INFO  | Task 6ca4e60e-e89a-4c61-984d-0186dbd80ae4 (ceph-configure-lvm-volumes) was prepared for execution.
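The facts play above prepares a custom facts directory on every host (Ansible's documented local-facts location is `/etc/ansible/facts.d`; the "Copy fact files" task is skipped here, so no fact files were staged). Static JSON files named `*.fact` in that directory surface under `ansible_local` on the next fact gathering. A self-contained sketch of that mechanism, using a temp directory in place of `/etc/ansible/facts.d`:

```python
# Mimic the static-file part of Ansible's local fact collection:
# a file <name>.fact containing JSON becomes ansible_local.<name>.
import json
import os
import tempfile

facts_d = tempfile.mkdtemp()  # stands in for /etc/ansible/facts.d

# Hypothetical fact file for illustration; the job staged none.
with open(os.path.join(facts_d, "testbed.fact"), "w") as f:
    json.dump({"role": "compute", "ceph": True}, f)

def gather_local_facts(directory):
    """Collect every static *.fact JSON file, keyed by its basename."""
    facts = {}
    for name in sorted(os.listdir(directory)):
        if name.endswith(".fact"):
            with open(os.path.join(directory, name)) as f:
                facts[name[:-len(".fact")]] = json.load(f)
    return facts

print(gather_local_facts(facts_d))
```

Executable `*.fact` scripts are also supported by Ansible (their stdout is parsed as JSON); only the static-file case is sketched here.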
2026-03-08 00:40:41.628659 | orchestrator | 2026-03-08 00:40:41 | INFO  | It takes a moment until task 6ca4e60e-e89a-4c61-984d-0186dbd80ae4 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-08 00:40:52.170608 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:40:52.170749 | orchestrator | 2.16.14
2026-03-08 00:40:52.170776 | orchestrator |
2026-03-08 00:40:52.170795 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-08 00:40:52.170814 | orchestrator |
2026-03-08 00:40:52.170832 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-08 00:40:52.170849 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.308) 0:00:00.308 **********
2026-03-08 00:40:52.170867 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-08 00:40:52.170886 | orchestrator |
2026-03-08 00:40:52.170905 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-08 00:40:52.170923 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.196) 0:00:00.505 **********
2026-03-08 00:40:52.170942 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:40:52.170961 | orchestrator |
2026-03-08 00:40:52.170978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.170995 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.174) 0:00:00.679 **********
2026-03-08 00:40:52.171026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-08 00:40:52.171045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-08 00:40:52.171065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-08 00:40:52.171086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-08 00:40:52.171108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-08 00:40:52.171129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-08 00:40:52.171151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-08 00:40:52.171173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-08 00:40:52.171195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-08 00:40:52.171216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-08 00:40:52.171270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-08 00:40:52.171294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-08 00:40:52.171316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-08 00:40:52.171335 | orchestrator |
2026-03-08 00:40:52.171408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171427 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.358) 0:00:01.038 **********
2026-03-08 00:40:52.171446 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171469 | orchestrator |
2026-03-08 00:40:52.171484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171500 | orchestrator | Sunday 08 March 2026 00:40:46 +0000 (0:00:00.175) 0:00:01.213 **********
2026-03-08 00:40:52.171516 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171531 | orchestrator |
2026-03-08 00:40:52.171546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171569 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.162) 0:00:01.376 **********
2026-03-08 00:40:52.171587 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171603 | orchestrator |
2026-03-08 00:40:52.171619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171634 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.176) 0:00:01.552 **********
2026-03-08 00:40:52.171650 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171668 | orchestrator |
2026-03-08 00:40:52.171685 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171701 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.170) 0:00:01.723 **********
2026-03-08 00:40:52.171716 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171731 | orchestrator |
2026-03-08 00:40:52.171747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171764 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.184) 0:00:01.907 **********
2026-03-08 00:40:52.171780 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171796 | orchestrator |
2026-03-08 00:40:52.171812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171829 | orchestrator | Sunday 08 March 2026 00:40:47 +0000 (0:00:00.174) 0:00:02.082 **********
2026-03-08 00:40:52.171846 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171861 | orchestrator |
2026-03-08 00:40:52.171877 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171893 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.189) 0:00:02.272 **********
2026-03-08 00:40:52.171908 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.171925 | orchestrator |
2026-03-08 00:40:52.171940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.171956 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.200) 0:00:02.472 **********
2026-03-08 00:40:52.171973 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f)
2026-03-08 00:40:52.171991 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f)
2026-03-08 00:40:52.172008 | orchestrator |
2026-03-08 00:40:52.172025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.172073 | orchestrator | Sunday 08 March 2026 00:40:48 +0000 (0:00:00.362) 0:00:02.835 **********
2026-03-08 00:40:52.172091 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea)
2026-03-08 00:40:52.172107 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea)
2026-03-08 00:40:52.172123 | orchestrator |
2026-03-08 00:40:52.172154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.172190 | orchestrator | Sunday 08 March 2026 00:40:49 +0000 (0:00:00.501) 0:00:03.336 **********
2026-03-08 00:40:52.172206 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f)
2026-03-08 00:40:52.172221 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f)
2026-03-08 00:40:52.172237 | orchestrator |
2026-03-08 00:40:52.172254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.172271 | orchestrator | Sunday 08 March 2026 00:40:49 +0000 (0:00:00.500) 0:00:03.837 **********
2026-03-08 00:40:52.172286 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d)
2026-03-08 00:40:52.172301 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d)
2026-03-08 00:40:52.172317 | orchestrator |
2026-03-08 00:40:52.172333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:40:52.172379 | orchestrator | Sunday 08 March 2026 00:40:50 +0000 (0:00:00.668) 0:00:04.506 **********
2026-03-08 00:40:52.172399 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-08 00:40:52.172435 | orchestrator |
2026-03-08 00:40:52.172451 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.172467 | orchestrator | Sunday 08 March 2026 00:40:50 +0000 (0:00:00.301) 0:00:04.807 **********
2026-03-08 00:40:52.172482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-08 00:40:52.172498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-08 00:40:52.172513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-08 00:40:52.172528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-08 00:40:52.172544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-08 00:40:52.172560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-08 00:40:52.172577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-08 00:40:52.172592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-08 00:40:52.172608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-08 00:40:52.172623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-08 00:40:52.172639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-08 00:40:52.172654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-08 00:40:52.172670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-08 00:40:52.172686 | orchestrator |
2026-03-08 00:40:52.172701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.172717 | orchestrator | Sunday 08 March 2026 00:40:50 +0000 (0:00:00.332) 0:00:05.140 **********
2026-03-08 00:40:52.172733 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.172749 | orchestrator |
2026-03-08 00:40:52.172765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.172781 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.185) 0:00:05.325 **********
2026-03-08 00:40:52.172797 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.172814 | orchestrator |
2026-03-08 00:40:52.172830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.172846 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.176) 0:00:05.501 **********
2026-03-08 00:40:52.172862 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.172907 | orchestrator |
2026-03-08 00:40:52.172925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.172942 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.173) 0:00:05.675 **********
2026-03-08 00:40:52.172960 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.172978 | orchestrator |
2026-03-08 00:40:52.172998 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.173016 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.220) 0:00:05.895 **********
2026-03-08 00:40:52.173032 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.173050 | orchestrator |
2026-03-08 00:40:52.173067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.173084 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.172) 0:00:06.068 **********
2026-03-08 00:40:52.173102 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.173119 | orchestrator |
2026-03-08 00:40:52.173137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:52.173154 | orchestrator | Sunday 08 March 2026 00:40:51 +0000 (0:00:00.182) 0:00:06.251 **********
2026-03-08 00:40:52.173173 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:52.173191 | orchestrator |
2026-03-08 00:40:52.173225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793165 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.168) 0:00:06.419 **********
2026-03-08 00:40:58.793283 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793304 | orchestrator |
2026-03-08 00:40:58.793323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793339 | orchestrator | Sunday 08 March 2026 00:40:52 +0000 (0:00:00.177) 0:00:06.597 **********
2026-03-08 00:40:58.793387 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-08 00:40:58.793406 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-08 00:40:58.793422 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-08 00:40:58.793440 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-08 00:40:58.793456 | orchestrator |
2026-03-08 00:40:58.793473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793514 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.821) 0:00:07.419 **********
2026-03-08 00:40:58.793531 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793548 | orchestrator |
2026-03-08 00:40:58.793563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793579 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.174) 0:00:07.593 **********
2026-03-08 00:40:58.793594 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793610 | orchestrator |
2026-03-08 00:40:58.793626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793642 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.174) 0:00:07.768 **********
2026-03-08 00:40:58.793658 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793674 | orchestrator |
2026-03-08 00:40:58.793690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:40:58.793707 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.176) 0:00:07.944 **********
2026-03-08 00:40:58.793725 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793741 | orchestrator |
2026-03-08 00:40:58.793758 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-08 00:40:58.793774 | orchestrator | Sunday 08 March 2026 00:40:53 +0000 (0:00:00.173) 0:00:08.118 **********
2026-03-08 00:40:58.793791 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-08 00:40:58.793809 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-08 00:40:58.793826 | orchestrator |
2026-03-08 00:40:58.793842 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-08 00:40:58.793859 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.152) 0:00:08.270 **********
2026-03-08 00:40:58.793910 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.793943 | orchestrator |
2026-03-08 00:40:58.793976 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-08 00:40:58.794008 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.123) 0:00:08.394 **********
2026-03-08 00:40:58.794183 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.794206 | orchestrator |
2026-03-08 00:40:58.794225 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-08 00:40:58.794241 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.120) 0:00:08.514 **********
2026-03-08 00:40:58.794258 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.794274 | orchestrator |
2026-03-08 00:40:58.794291 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-08 00:40:58.794309 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.111) 0:00:08.626 **********
2026-03-08 00:40:58.794325 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:40:58.794395 | orchestrator |
2026-03-08 00:40:58.794417 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-08 00:40:58.794430 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.115) 0:00:08.741 **********
2026-03-08 00:40:58.794446 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb6eff58-5334-5828-9091-c0c39e64aeb1'}})
2026-03-08 00:40:58.794461 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e3bef375-74a7-543b-9618-1787c99aecbb'}})
2026-03-08 00:40:58.794476 | orchestrator |
2026-03-08 00:40:58.794490 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-08 00:40:58.794505 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.135) 0:00:08.877 **********
2026-03-08 00:40:58.794522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb6eff58-5334-5828-9091-c0c39e64aeb1'}})
2026-03-08 00:40:58.794552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e3bef375-74a7-543b-9618-1787c99aecbb'}})
2026-03-08 00:40:58.794582 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.794600 | orchestrator |
2026-03-08 00:40:58.794615 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-08 00:40:58.794632 | orchestrator | Sunday 08 March 2026 00:40:54 +0000 (0:00:00.124) 0:00:09.001 **********
2026-03-08 00:40:58.794649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb6eff58-5334-5828-9091-c0c39e64aeb1'}})
2026-03-08 00:40:58.794664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e3bef375-74a7-543b-9618-1787c99aecbb'}})
2026-03-08 00:40:58.794680 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:40:58.794697 | orchestrator |
2026-03-08 00:40:58.794712 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-08 00:40:58.794728 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.261) 0:00:09.263 **********
2026-03-08 00:40:58.794742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb6eff58-5334-5828-9091-c0c39e64aeb1'}})
2026-03-08 00:40:58.794787 | orchestrator | skipping: [testbed-node-3]
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e3bef375-74a7-543b-9618-1787c99aecbb'}})  2026-03-08 00:40:58.794806 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.794822 | orchestrator | 2026-03-08 00:40:58.794839 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-08 00:40:58.794855 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.146) 0:00:09.410 ********** 2026-03-08 00:40:58.794872 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:58.794889 | orchestrator | 2026-03-08 00:40:58.794906 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-08 00:40:58.794921 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.112) 0:00:09.522 ********** 2026-03-08 00:40:58.794936 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:40:58.794969 | orchestrator | 2026-03-08 00:40:58.794987 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-08 00:40:58.795004 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.128) 0:00:09.651 ********** 2026-03-08 00:40:58.795020 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795038 | orchestrator | 2026-03-08 00:40:58.795054 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-08 00:40:58.795071 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.120) 0:00:09.771 ********** 2026-03-08 00:40:58.795088 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795105 | orchestrator | 2026-03-08 00:40:58.795121 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-08 00:40:58.795138 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.127) 0:00:09.898 ********** 2026-03-08 00:40:58.795156 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795172 | orchestrator | 2026-03-08 
00:40:58.795190 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-08 00:40:58.795207 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.136) 0:00:10.034 ********** 2026-03-08 00:40:58.795225 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:40:58.795240 | orchestrator |  "ceph_osd_devices": { 2026-03-08 00:40:58.795256 | orchestrator |  "sdb": { 2026-03-08 00:40:58.795272 | orchestrator |  "osd_lvm_uuid": "fb6eff58-5334-5828-9091-c0c39e64aeb1" 2026-03-08 00:40:58.795287 | orchestrator |  }, 2026-03-08 00:40:58.795302 | orchestrator |  "sdc": { 2026-03-08 00:40:58.795317 | orchestrator |  "osd_lvm_uuid": "e3bef375-74a7-543b-9618-1787c99aecbb" 2026-03-08 00:40:58.795333 | orchestrator |  } 2026-03-08 00:40:58.795375 | orchestrator |  } 2026-03-08 00:40:58.795393 | orchestrator | } 2026-03-08 00:40:58.795409 | orchestrator | 2026-03-08 00:40:58.795426 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-08 00:40:58.795444 | orchestrator | Sunday 08 March 2026 00:40:55 +0000 (0:00:00.146) 0:00:10.181 ********** 2026-03-08 00:40:58.795461 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795479 | orchestrator | 2026-03-08 00:40:58.795497 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-08 00:40:58.795515 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.133) 0:00:10.315 ********** 2026-03-08 00:40:58.795532 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795548 | orchestrator | 2026-03-08 00:40:58.795565 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-08 00:40:58.795582 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.133) 0:00:10.449 ********** 2026-03-08 00:40:58.795596 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:40:58.795612 | orchestrator | 2026-03-08 
00:40:58.795629 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-08 00:40:58.795646 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.129) 0:00:10.578 ********** 2026-03-08 00:40:58.795663 | orchestrator | changed: [testbed-node-3] => { 2026-03-08 00:40:58.795681 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-08 00:40:58.795698 | orchestrator |  "ceph_osd_devices": { 2026-03-08 00:40:58.795716 | orchestrator |  "sdb": { 2026-03-08 00:40:58.795734 | orchestrator |  "osd_lvm_uuid": "fb6eff58-5334-5828-9091-c0c39e64aeb1" 2026-03-08 00:40:58.795752 | orchestrator |  }, 2026-03-08 00:40:58.795769 | orchestrator |  "sdc": { 2026-03-08 00:40:58.795786 | orchestrator |  "osd_lvm_uuid": "e3bef375-74a7-543b-9618-1787c99aecbb" 2026-03-08 00:40:58.795804 | orchestrator |  } 2026-03-08 00:40:58.795819 | orchestrator |  }, 2026-03-08 00:40:58.795837 | orchestrator |  "lvm_volumes": [ 2026-03-08 00:40:58.795852 | orchestrator |  { 2026-03-08 00:40:58.795868 | orchestrator |  "data": "osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1", 2026-03-08 00:40:58.795886 | orchestrator |  "data_vg": "ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1" 2026-03-08 00:40:58.795917 | orchestrator |  }, 2026-03-08 00:40:58.795935 | orchestrator |  { 2026-03-08 00:40:58.795952 | orchestrator |  "data": "osd-block-e3bef375-74a7-543b-9618-1787c99aecbb", 2026-03-08 00:40:58.795969 | orchestrator |  "data_vg": "ceph-e3bef375-74a7-543b-9618-1787c99aecbb" 2026-03-08 00:40:58.795985 | orchestrator |  } 2026-03-08 00:40:58.796003 | orchestrator |  ] 2026-03-08 00:40:58.796020 | orchestrator |  } 2026-03-08 00:40:58.796038 | orchestrator | } 2026-03-08 00:40:58.796053 | orchestrator | 2026-03-08 00:40:58.796071 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-08 00:40:58.796086 | orchestrator | Sunday 08 March 2026 00:40:56 +0000 (0:00:00.422) 0:00:11.001 ********** 2026-03-08 
00:40:58.796103 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-08 00:40:58.796120 | orchestrator | 2026-03-08 00:40:58.796137 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-08 00:40:58.796154 | orchestrator | 2026-03-08 00:40:58.796172 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:40:58.796189 | orchestrator | Sunday 08 March 2026 00:40:58 +0000 (0:00:01.617) 0:00:12.619 ********** 2026-03-08 00:40:58.796207 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-08 00:40:58.796223 | orchestrator | 2026-03-08 00:40:58.796240 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 00:40:58.796258 | orchestrator | Sunday 08 March 2026 00:40:58 +0000 (0:00:00.217) 0:00:12.837 ********** 2026-03-08 00:40:58.796275 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:40:58.796292 | orchestrator | 2026-03-08 00:40:58.796324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.537332 | orchestrator | Sunday 08 March 2026 00:40:58 +0000 (0:00:00.209) 0:00:13.047 ********** 2026-03-08 00:41:05.537513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:41:05.537533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:41:05.537545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:41:05.537556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:41:05.537568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:41:05.537579 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:41:05.537589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:41:05.537605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:41:05.537616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-08 00:41:05.537628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:41:05.537639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:41:05.537650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:41:05.537680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:41:05.537692 | orchestrator | 2026-03-08 00:41:05.537704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.537715 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.329) 0:00:13.376 ********** 2026-03-08 00:41:05.537726 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.537738 | orchestrator | 2026-03-08 00:41:05.537749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.537921 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.169) 0:00:13.546 ********** 2026-03-08 00:41:05.537998 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538080 | orchestrator | 2026-03-08 00:41:05.538095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538109 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.174) 0:00:13.721 ********** 2026-03-08 00:41:05.538178 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:41:05.538193 | orchestrator | 2026-03-08 00:41:05.538207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538236 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.173) 0:00:13.895 ********** 2026-03-08 00:41:05.538248 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538259 | orchestrator | 2026-03-08 00:41:05.538292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538305 | orchestrator | Sunday 08 March 2026 00:40:59 +0000 (0:00:00.167) 0:00:14.062 ********** 2026-03-08 00:41:05.538316 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538326 | orchestrator | 2026-03-08 00:41:05.538396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538409 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.432) 0:00:14.495 ********** 2026-03-08 00:41:05.538420 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538430 | orchestrator | 2026-03-08 00:41:05.538441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538452 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.188) 0:00:14.684 ********** 2026-03-08 00:41:05.538493 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538505 | orchestrator | 2026-03-08 00:41:05.538516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538527 | orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.184) 0:00:14.868 ********** 2026-03-08 00:41:05.538538 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.538600 | orchestrator | 2026-03-08 00:41:05.538614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538625 | 
orchestrator | Sunday 08 March 2026 00:41:00 +0000 (0:00:00.208) 0:00:15.077 ********** 2026-03-08 00:41:05.538636 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59) 2026-03-08 00:41:05.538648 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59) 2026-03-08 00:41:05.538658 | orchestrator | 2026-03-08 00:41:05.538669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538680 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.387) 0:00:15.464 ********** 2026-03-08 00:41:05.538691 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c) 2026-03-08 00:41:05.538701 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c) 2026-03-08 00:41:05.538712 | orchestrator | 2026-03-08 00:41:05.538723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538733 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.379) 0:00:15.844 ********** 2026-03-08 00:41:05.538744 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5) 2026-03-08 00:41:05.538755 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5) 2026-03-08 00:41:05.538765 | orchestrator | 2026-03-08 00:41:05.538776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538811 | orchestrator | Sunday 08 March 2026 00:41:01 +0000 (0:00:00.367) 0:00:16.212 ********** 2026-03-08 00:41:05.538822 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9) 2026-03-08 00:41:05.538833 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9) 2026-03-08 00:41:05.538844 | orchestrator | 2026-03-08 00:41:05.538869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:41:05.538880 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.388) 0:00:16.600 ********** 2026-03-08 00:41:05.538891 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:41:05.538901 | orchestrator | 2026-03-08 00:41:05.538912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.538923 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.296) 0:00:16.896 ********** 2026-03-08 00:41:05.538933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:41:05.538944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:41:05.538963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:41:05.538974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:41:05.538985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:41:05.538995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:41:05.539006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:41:05.539016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:41:05.539027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-08 00:41:05.539037 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:41:05.539048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:41:05.539058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:41:05.539069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:41:05.539079 | orchestrator | 2026-03-08 00:41:05.539090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539101 | orchestrator | Sunday 08 March 2026 00:41:02 +0000 (0:00:00.331) 0:00:17.227 ********** 2026-03-08 00:41:05.539111 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539122 | orchestrator | 2026-03-08 00:41:05.539133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539143 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.490) 0:00:17.717 ********** 2026-03-08 00:41:05.539154 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539164 | orchestrator | 2026-03-08 00:41:05.539176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539194 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.166) 0:00:17.884 ********** 2026-03-08 00:41:05.539212 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539229 | orchestrator | 2026-03-08 00:41:05.539247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539265 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.163) 0:00:18.047 ********** 2026-03-08 00:41:05.539284 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539303 | orchestrator | 2026-03-08 00:41:05.539315 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539325 | orchestrator | Sunday 08 March 2026 00:41:03 +0000 (0:00:00.167) 0:00:18.214 ********** 2026-03-08 00:41:05.539358 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539370 | orchestrator | 2026-03-08 00:41:05.539380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539391 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.164) 0:00:18.379 ********** 2026-03-08 00:41:05.539402 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539424 | orchestrator | 2026-03-08 00:41:05.539434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539445 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.169) 0:00:18.549 ********** 2026-03-08 00:41:05.539456 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539466 | orchestrator | 2026-03-08 00:41:05.539477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539488 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.175) 0:00:18.725 ********** 2026-03-08 00:41:05.539498 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:05.539509 | orchestrator | 2026-03-08 00:41:05.539520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539530 | orchestrator | Sunday 08 March 2026 00:41:04 +0000 (0:00:00.185) 0:00:18.910 ********** 2026-03-08 00:41:05.539541 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-08 00:41:05.539552 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-08 00:41:05.539563 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-08 00:41:05.539574 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-08 00:41:05.539585 | orchestrator | 2026-03-08 
00:41:05.539595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:05.539606 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.779) 0:00:19.689 ********** 2026-03-08 00:41:05.539617 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194257 | orchestrator | 2026-03-08 00:41:11.194419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.194440 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.177) 0:00:19.866 ********** 2026-03-08 00:41:11.194454 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194466 | orchestrator | 2026-03-08 00:41:11.194477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.194488 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.189) 0:00:20.056 ********** 2026-03-08 00:41:11.194499 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194510 | orchestrator | 2026-03-08 00:41:11.194521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:41:11.194532 | orchestrator | Sunday 08 March 2026 00:41:05 +0000 (0:00:00.174) 0:00:20.230 ********** 2026-03-08 00:41:11.194543 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194554 | orchestrator | 2026-03-08 00:41:11.194565 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-08 00:41:11.194575 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.569) 0:00:20.799 ********** 2026-03-08 00:41:11.194586 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-08 00:41:11.194597 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-08 00:41:11.194608 | orchestrator | 2026-03-08 00:41:11.194619 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-08 00:41:11.194648 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.168) 0:00:20.967 ********** 2026-03-08 00:41:11.194660 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194671 | orchestrator | 2026-03-08 00:41:11.194682 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-08 00:41:11.194693 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.133) 0:00:21.101 ********** 2026-03-08 00:41:11.194704 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194719 | orchestrator | 2026-03-08 00:41:11.194738 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-08 00:41:11.194763 | orchestrator | Sunday 08 March 2026 00:41:06 +0000 (0:00:00.115) 0:00:21.216 ********** 2026-03-08 00:41:11.194782 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.194803 | orchestrator | 2026-03-08 00:41:11.194824 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-08 00:41:11.194843 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.141) 0:00:21.358 ********** 2026-03-08 00:41:11.194894 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:41:11.194917 | orchestrator | 2026-03-08 00:41:11.194938 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-08 00:41:11.194954 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.136) 0:00:21.494 ********** 2026-03-08 00:41:11.194967 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9614fc2-8329-596c-937c-60ceb39d5fd3'}}) 2026-03-08 00:41:11.194981 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb569be8-41bf-5aa1-acb9-f145abad3137'}}) 2026-03-08 00:41:11.194993 | orchestrator | 2026-03-08 00:41:11.195006 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-08 00:41:11.195019 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.149) 0:00:21.644 ********** 2026-03-08 00:41:11.195033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9614fc2-8329-596c-937c-60ceb39d5fd3'}})  2026-03-08 00:41:11.195049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb569be8-41bf-5aa1-acb9-f145abad3137'}})  2026-03-08 00:41:11.195061 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.195073 | orchestrator | 2026-03-08 00:41:11.195085 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-08 00:41:11.195096 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.157) 0:00:21.801 ********** 2026-03-08 00:41:11.195106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9614fc2-8329-596c-937c-60ceb39d5fd3'}})  2026-03-08 00:41:11.195117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb569be8-41bf-5aa1-acb9-f145abad3137'}})  2026-03-08 00:41:11.195129 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.195139 | orchestrator | 2026-03-08 00:41:11.195150 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-08 00:41:11.195161 | orchestrator | Sunday 08 March 2026 00:41:07 +0000 (0:00:00.137) 0:00:21.939 ********** 2026-03-08 00:41:11.195171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9614fc2-8329-596c-937c-60ceb39d5fd3'}})  2026-03-08 00:41:11.195182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb569be8-41bf-5aa1-acb9-f145abad3137'}})  2026-03-08 00:41:11.195193 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:41:11.195203 | 
TASK [Compile lvm_volumes] *****************************************************
Sunday 08 March 2026 00:41:07 +0000 (0:00:00.145) 0:00:22.084 **********
ok: [testbed-node-4]

TASK [Set OSD devices config data] *********************************************
Sunday 08 March 2026 00:41:07 +0000 (0:00:00.110) 0:00:22.195 **********
ok: [testbed-node-4]

TASK [Set DB devices config data] **********************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.123) 0:00:22.319 **********
skipping: [testbed-node-4]

TASK [Set WAL devices config data] *********************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.343) 0:00:22.662 **********
skipping: [testbed-node-4]

TASK [Set DB+WAL devices config data] ******************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.106) 0:00:22.768 **********
skipping: [testbed-node-4]

TASK [Print ceph_osd_devices] **************************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.108) 0:00:22.877 **********
ok: [testbed-node-4] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "e9614fc2-8329-596c-937c-60ceb39d5fd3"
        },
        "sdc": {
            "osd_lvm_uuid": "eb569be8-41bf-5aa1-acb9-f145abad3137"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.116) 0:00:22.993 **********
skipping: [testbed-node-4]

TASK [Print DB devices] ********************************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.110) 0:00:23.103 **********
skipping: [testbed-node-4]

TASK [Print shared DB/WAL devices] *********************************************
Sunday 08 March 2026 00:41:08 +0000 (0:00:00.148) 0:00:23.252 **********
skipping: [testbed-node-4]

TASK [Print configuration data] ************************************************
Sunday 08 March 2026 00:41:09 +0000 (0:00:00.130) 0:00:23.383 **********
changed: [testbed-node-4] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "e9614fc2-8329-596c-937c-60ceb39d5fd3"
            },
            "sdc": {
                "osd_lvm_uuid": "eb569be8-41bf-5aa1-acb9-f145abad3137"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3",
                "data_vg": "ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3"
            },
            {
                "data": "osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137",
                "data_vg": "ceph-eb569be8-41bf-5aa1-acb9-f145abad3137"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Sunday 08 March 2026 00:41:09 +0000 (0:00:00.177) 0:00:23.560 **********
changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Sunday 08 March 2026 00:41:10 +0000 (0:00:00.903) 0:00:24.464 **********
ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Sunday 08 March 2026 00:41:10 +0000 (0:00:00.527) 0:00:24.992 **********
ok: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:10 +0000 (0:00:00.211) 0:00:25.203 **********
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:11 +0000 (0:00:00.297) 0:00:25.500 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:11 +0000 (0:00:00.165) 0:00:25.666 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:11 +0000 (0:00:00.180) 0:00:25.846 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:11 +0000 (0:00:00.191) 0:00:26.038 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:11 +0000 (0:00:00.172) 0:00:26.211 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:12 +0000 (0:00:00.189) 0:00:26.400 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:12 +0000 (0:00:00.186) 0:00:26.587 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:12 +0000 (0:00:00.187) 0:00:26.774 **********
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:12 +0000 (0:00:00.237) 0:00:27.012 **********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3)

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:13 +0000 (0:00:00.666) 0:00:27.679 **********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49)

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:13 +0000 (0:00:00.369) 0:00:28.048 **********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea)

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:14 +0000 (0:00:00.379) 0:00:28.427 **********
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464)

TASK [Add known links to the list of available block devices] ******************
Sunday 08 March 2026 00:41:14 +0000 (0:00:00.415) 0:00:28.843 **********
ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:14 +0000 (0:00:00.286) 0:00:29.129 **********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.274) 0:00:29.404 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.144) 0:00:29.549 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.142) 0:00:29.691 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.166) 0:00:29.857 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.179) 0:00:30.036 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:15 +0000 (0:00:00.184) 0:00:30.221 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:16 +0000 (0:00:00.501) 0:00:30.722 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:16 +0000 (0:00:00.162) 0:00:30.884 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:16 +0000 (0:00:00.172) 0:00:31.057 **********
ok: [testbed-node-5] => (item=sda1)
ok: [testbed-node-5] => (item=sda14)
ok: [testbed-node-5] => (item=sda15)
ok: [testbed-node-5] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:17 +0000 (0:00:00.596) 0:00:31.653 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:17 +0000 (0:00:00.206) 0:00:31.859 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:17 +0000 (0:00:00.200) 0:00:32.060 **********
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Sunday 08 March 2026 00:41:17 +0000 (0:00:00.174) 0:00:32.234 **********
skipping: [testbed-node-5]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Sunday 08 March 2026 00:41:18 +0000 (0:00:00.194) 0:00:32.429 **********
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Sunday 08 March 2026 00:41:18 +0000 (0:00:00.156) 0:00:32.585 **********
skipping: [testbed-node-5]

TASK [Generate DB VG names] ****************************************************
Sunday 08 March 2026 00:41:18 +0000 (0:00:00.147) 0:00:32.733 **********
skipping: [testbed-node-5]

TASK [Generate shared DB/WAL VG names] *****************************************
Sunday 08 March 2026 00:41:18 +0000 (0:00:00.149) 0:00:32.882 **********
skipping: [testbed-node-5]

TASK [Define lvm_volumes structures] *******************************************
Sunday 08 March 2026 00:41:18 +0000 (0:00:00.268) 0:00:33.150 **********
ok: [testbed-node-5]

TASK [Generate lvm_volumes structure (block only)] *****************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.147) 0:00:33.298 **********
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5bde4b8d-c924-5d1f-8c9a-71f523250ead'}})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad275011-1eda-59d8-b818-a96e3c140717'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.148) 0:00:33.446 **********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5bde4b8d-c924-5d1f-8c9a-71f523250ead'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad275011-1eda-59d8-b818-a96e3c140717'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.142) 0:00:33.589 **********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5bde4b8d-c924-5d1f-8c9a-71f523250ead'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad275011-1eda-59d8-b818-a96e3c140717'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.165) 0:00:33.755 **********
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5bde4b8d-c924-5d1f-8c9a-71f523250ead'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad275011-1eda-59d8-b818-a96e3c140717'}})
skipping: [testbed-node-5]

TASK [Compile lvm_volumes] *****************************************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.170) 0:00:33.926 **********
ok: [testbed-node-5]

TASK [Set OSD devices config data] *********************************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.180) 0:00:34.107 **********
ok: [testbed-node-5]

TASK [Set DB devices config data] **********************************************
Sunday 08 March 2026 00:41:19 +0000 (0:00:00.122) 0:00:34.229 **********
skipping: [testbed-node-5]

TASK [Set WAL devices config data] *********************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.121) 0:00:34.351 **********
skipping: [testbed-node-5]

TASK [Set DB+WAL devices config data] ******************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.117) 0:00:34.469 **********
skipping: [testbed-node-5]

TASK [Print ceph_osd_devices] **************************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.107) 0:00:34.576 **********
ok: [testbed-node-5] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "5bde4b8d-c924-5d1f-8c9a-71f523250ead"
        },
        "sdc": {
            "osd_lvm_uuid": "ad275011-1eda-59d8-b818-a96e3c140717"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.115) 0:00:34.692 **********
skipping: [testbed-node-5]

TASK [Print DB devices] ********************************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.238) 0:00:34.931 **********
skipping: [testbed-node-5]

TASK [Print shared DB/WAL devices] *********************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.126) 0:00:35.057 **********
skipping: [testbed-node-5]

TASK [Print configuration data] ************************************************
Sunday 08 March 2026 00:41:20 +0000 (0:00:00.128) 0:00:35.185 **********
changed: [testbed-node-5] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "5bde4b8d-c924-5d1f-8c9a-71f523250ead"
            },
            "sdc": {
                "osd_lvm_uuid": "ad275011-1eda-59d8-b818-a96e3c140717"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead",
                "data_vg": "ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead"
            },
            {
                "data": "osd-block-ad275011-1eda-59d8-b818-a96e3c140717",
                "data_vg": "ceph-ad275011-1eda-59d8-b818-a96e3c140717"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Sunday 08 March 2026 00:41:21 +0000 (0:00:00.199) 0:00:35.384 **********
changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]

PLAY RECAP *********************************************************************
testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Sunday 08 March 2026 00:41:21 +0000 (0:00:00.841) 0:00:36.226 **********
===============================================================================
Write configuration file ------------------------------------------------ 3.36s
Add known links to the list of available block devices ------------------ 0.98s
Get extra vars for Ceph configuration ----------------------------------- 0.94s
Add known partitions to the list of available block devices ------------- 0.94s
Add known partitions to the list of available block devices ------------- 0.82s
Print configuration data ------------------------------------------------ 0.80s
Add known partitions to the list of available block devices ------------- 0.78s
Add known links to the list of available block devices ------------------ 0.67s
Add known links to the list of available block devices ------------------ 0.67s
Add known partitions to the list of available block devices ------------- 0.60s
Get initial list of available block devices ----------------------------- 0.60s
Set DB devices config data ---------------------------------------------- 0.59s
Add known partitions to the list of available block devices ------------- 0.57s
Generate lvm_volumes structure (block + wal) ---------------------------- 0.57s
Generate shared DB/WAL VG names ----------------------------------------- 0.52s
Add known partitions to the list of available block devices ------------- 0.50s
Add known links to the list of available block devices ------------------ 0.50s
Add known links to the list of available block devices ------------------ 0.50s
Add known partitions to the list of available block devices ------------- 0.49s
Print WAL devices ------------------------------------------------------- 0.48s

2026-03-08 00:41:44 | INFO  | Task 867cf49e-b6b4-4301-b671-74deb487653e (sync inventory) is running in background. Output coming soon.
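Editor's note: the "Print configuration data" output above shows how each entry in `ceph_osd_devices` is turned into an `lvm_volumes` entry: the `osd_lvm_uuid` becomes an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal Python sketch of that mapping, using the UUIDs from the log (the function name is illustrative, not the playbook's actual code):

```python
# Illustrative sketch (not the playbook's real implementation): reproduce the
# ceph_osd_devices -> lvm_volumes transformation visible in the task output.
def compile_lvm_volumes(ceph_osd_devices):
    # Each OSD device contributes one LV/VG pair derived from its UUID.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# UUIDs taken verbatim from the testbed-node-4 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "e9614fc2-8329-596c-937c-60ceb39d5fd3"},
    "sdc": {"osd_lvm_uuid": "eb569be8-41bf-5aa1-acb9-f145abad3137"},
}
print(compile_lvm_volumes(devices))
```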
2026-03-08 00:42:09.161892 | orchestrator | 2026-03-08 00:41:46 | INFO  | Starting group_vars file reorganization
2026-03-08 00:42:09.162078 | orchestrator | 2026-03-08 00:41:46 | INFO  | Moved 0 file(s) to their respective directories
2026-03-08 00:42:09.162107 | orchestrator | 2026-03-08 00:41:46 | INFO  | Group_vars file reorganization completed
2026-03-08 00:42:09.162127 | orchestrator | 2026-03-08 00:41:48 | INFO  | Starting variable preparation from inventory
2026-03-08 00:42:09.162147 | orchestrator | 2026-03-08 00:41:50 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-08 00:42:09.162167 | orchestrator | 2026-03-08 00:41:50 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-08 00:42:09.162210 | orchestrator | 2026-03-08 00:41:50 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-08 00:42:09.162232 | orchestrator | 2026-03-08 00:41:50 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-08 00:42:09.162251 | orchestrator | 2026-03-08 00:41:50 | INFO  | Variable preparation completed
2026-03-08 00:42:09.162270 | orchestrator | 2026-03-08 00:41:52 | INFO  | Starting inventory overwrite handling
2026-03-08 00:42:09.162330 | orchestrator | 2026-03-08 00:41:52 | INFO  | Handling group overwrites in 99-overwrite
2026-03-08 00:42:09.162342 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removing group frr:children from 60-generic
2026-03-08 00:42:09.162377 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-08 00:42:09.162394 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-08 00:42:09.162414 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-08 00:42:09.162435 | orchestrator | 2026-03-08 00:41:52 | INFO  | Handling group overwrites in 20-roles
2026-03-08 00:42:09.162454 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-08 00:42:09.162473 | orchestrator | 2026-03-08 00:41:52 | INFO  | Removed 5 group(s) in total
2026-03-08 00:42:09.162493 | orchestrator | 2026-03-08 00:41:52 | INFO  | Inventory overwrite handling completed
2026-03-08 00:42:09.162512 | orchestrator | 2026-03-08 00:41:53 | INFO  | Starting merge of inventory files
2026-03-08 00:42:09.162532 | orchestrator | 2026-03-08 00:41:53 | INFO  | Inventory files merged successfully
2026-03-08 00:42:09.162550 | orchestrator | 2026-03-08 00:41:57 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-08 00:42:09.162568 | orchestrator | 2026-03-08 00:42:07 | INFO  | Successfully wrote ClusterShell configuration
2026-03-08 00:42:09.162583 | orchestrator | [master 5ff6fda] 2026-03-08-00-42
2026-03-08 00:42:09.162596 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-08 00:42:11.236140 | orchestrator | 2026-03-08 00:42:11 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-08 00:42:11.289937 | orchestrator | 2026-03-08 00:42:11 | INFO  | Task e9872bec-82e3-42f0-9341-492158d7355a (ceph-create-lvm-devices) was prepared for execution.
2026-03-08 00:42:11.290106 | orchestrator | 2026-03-08 00:42:11 | INFO  | It takes a moment until task e9872bec-82e3-42f0-9341-492158d7355a (ceph-create-lvm-devices) has been started and output is visible here.
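The "variable preparation" step above derives values from the Ansible inventory and writes each one into a numbered group_vars file (for example `050-kolla-ceph-rgw-hosts.yml` with `ceph_rgw_hosts`), so the numeric prefix controls the order in which later files can override earlier ones. A minimal illustrative sketch of that idea is below; the `write_prepared_vars` helper and its file layout are assumptions for illustration, not the OSISM implementation.

```python
# Hypothetical sketch: render inventory-derived variables into numbered
# group_vars files, one YAML file per prepared variable set, as the
# "Writing 050-*.yml with ..." log lines above suggest.
from pathlib import Path
import tempfile


def write_prepared_vars(target_dir: Path, prepared: dict) -> int:
    """Write one small YAML file per entry; return the number of files written."""
    target_dir.mkdir(parents=True, exist_ok=True)
    written = 0
    for filename, variables in prepared.items():
        lines = ["---"]
        for key, value in variables.items():
            if isinstance(value, list):
                # Render lists as YAML block sequences.
                lines.append(f"{key}:")
                lines.extend(f"  - {item}" for item in value)
            else:
                lines.append(f"{key}: {value}")
        (target_dir / filename).write_text("\n".join(lines) + "\n")
        written += 1
    return written


if __name__ == "__main__":
    # File and variable names mirror the log; the values are placeholders.
    prepared = {
        "050-kolla-ceph-rgw-hosts.yml": {
            "ceph_rgw_hosts": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
        },
        "050-ceph-cluster-fsid.yml": {
            "ceph_cluster_fsid": "11111111-2222-3333-4444-555555555555",
        },
    }
    with tempfile.TemporaryDirectory() as tmp:
        count = write_prepared_vars(Path(tmp), prepared)
        print(f"{count} file(s) written")
```

The numeric-prefix convention means a later `99-overwrite` layer wins simply by lexicographic load order, which matches the "Handling group overwrites in 99-overwrite" lines that follow.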
2026-03-08 00:42:20.987621 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:42:20.987686 | orchestrator | 2.16.14
2026-03-08 00:42:20.987694 | orchestrator |
2026-03-08 00:42:20.987699 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-08 00:42:20.987705 | orchestrator |
2026-03-08 00:42:20.987710 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-08 00:42:20.987715 | orchestrator | Sunday 08 March 2026 00:42:14 +0000 (0:00:00.231) 0:00:00.231 **********
2026-03-08 00:42:20.987720 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-08 00:42:20.987725 | orchestrator |
2026-03-08 00:42:20.987730 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-08 00:42:20.987735 | orchestrator | Sunday 08 March 2026 00:42:15 +0000 (0:00:00.276) 0:00:00.507 **********
2026-03-08 00:42:20.987740 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:20.987745 | orchestrator |
2026-03-08 00:42:20.987750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987755 | orchestrator | Sunday 08 March 2026 00:42:15 +0000 (0:00:00.198) 0:00:00.705 **********
2026-03-08 00:42:20.987759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-08 00:42:20.987764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-08 00:42:20.987769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-08 00:42:20.987774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-08 00:42:20.987779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-08 00:42:20.987783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-08 00:42:20.987788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-08 00:42:20.987806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-08 00:42:20.987815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-08 00:42:20.987823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-08 00:42:20.987830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-08 00:42:20.987838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-08 00:42:20.987845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-08 00:42:20.987852 | orchestrator |
2026-03-08 00:42:20.987860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987867 | orchestrator | Sunday 08 March 2026 00:42:15 +0000 (0:00:00.425) 0:00:01.131 **********
2026-03-08 00:42:20.987874 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.987882 | orchestrator |
2026-03-08 00:42:20.987890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987897 | orchestrator | Sunday 08 March 2026 00:42:15 +0000 (0:00:00.181) 0:00:01.313 **********
2026-03-08 00:42:20.987904 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.987911 | orchestrator |
2026-03-08 00:42:20.987919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987926 | orchestrator | Sunday 08 March 2026 00:42:15 +0000 (0:00:00.172) 0:00:01.485 **********
2026-03-08 00:42:20.987935 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.987942 | orchestrator |
2026-03-08 00:42:20.987950 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987958 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.177) 0:00:01.662 **********
2026-03-08 00:42:20.987965 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.987973 | orchestrator |
2026-03-08 00:42:20.987981 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.987989 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.182) 0:00:01.845 **********
2026-03-08 00:42:20.987996 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988003 | orchestrator |
2026-03-08 00:42:20.988010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988030 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.179) 0:00:02.025 **********
2026-03-08 00:42:20.988038 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988046 | orchestrator |
2026-03-08 00:42:20.988055 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988063 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.190) 0:00:02.215 **********
2026-03-08 00:42:20.988068 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988073 | orchestrator |
2026-03-08 00:42:20.988078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988083 | orchestrator | Sunday 08 March 2026 00:42:16 +0000 (0:00:00.180) 0:00:02.396 **********
2026-03-08 00:42:20.988088 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988093 | orchestrator |
2026-03-08 00:42:20.988097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988102 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.184) 0:00:02.581 **********
2026-03-08 00:42:20.988107 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f)
2026-03-08 00:42:20.988113 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f)
2026-03-08 00:42:20.988117 | orchestrator |
2026-03-08 00:42:20.988122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988137 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.368) 0:00:02.950 **********
2026-03-08 00:42:20.988149 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea)
2026-03-08 00:42:20.988154 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea)
2026-03-08 00:42:20.988158 | orchestrator |
2026-03-08 00:42:20.988163 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988168 | orchestrator | Sunday 08 March 2026 00:42:17 +0000 (0:00:00.463) 0:00:03.413 **********
2026-03-08 00:42:20.988173 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f)
2026-03-08 00:42:20.988177 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f)
2026-03-08 00:42:20.988182 | orchestrator |
2026-03-08 00:42:20.988187 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988192 | orchestrator | Sunday 08 March 2026 00:42:18 +0000 (0:00:00.549) 0:00:03.963 **********
2026-03-08 00:42:20.988197 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d)
2026-03-08 00:42:20.988202 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d)
2026-03-08 00:42:20.988208 | orchestrator |
2026-03-08 00:42:20.988214 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:42:20.988219 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.669) 0:00:04.633 **********
2026-03-08 00:42:20.988225 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-08 00:42:20.988230 | orchestrator |
2026-03-08 00:42:20.988236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988241 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.290) 0:00:04.923 **********
2026-03-08 00:42:20.988247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-08 00:42:20.988252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-08 00:42:20.988258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-08 00:42:20.988312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-08 00:42:20.988320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-08 00:42:20.988328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-08 00:42:20.988334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-08 00:42:20.988340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-08 00:42:20.988345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-08 00:42:20.988351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-08 00:42:20.988356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-08 00:42:20.988362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-08 00:42:20.988367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-08 00:42:20.988373 | orchestrator |
2026-03-08 00:42:20.988378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988384 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.352) 0:00:05.276 **********
2026-03-08 00:42:20.988389 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988394 | orchestrator |
2026-03-08 00:42:20.988400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988405 | orchestrator | Sunday 08 March 2026 00:42:19 +0000 (0:00:00.189) 0:00:05.466 **********
2026-03-08 00:42:20.988415 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988421 | orchestrator |
2026-03-08 00:42:20.988426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988431 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.166) 0:00:05.632 **********
2026-03-08 00:42:20.988436 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988442 | orchestrator |
2026-03-08 00:42:20.988447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988453 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.170) 0:00:05.803 **********
2026-03-08 00:42:20.988459 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988464 | orchestrator |
2026-03-08 00:42:20.988469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988475 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.165) 0:00:05.968 **********
2026-03-08 00:42:20.988480 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988486 | orchestrator |
2026-03-08 00:42:20.988491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988497 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.183) 0:00:06.152 **********
2026-03-08 00:42:20.988502 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988508 | orchestrator |
2026-03-08 00:42:20.988513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:20.988518 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.156) 0:00:06.308 **********
2026-03-08 00:42:20.988524 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:20.988530 | orchestrator |
2026-03-08 00:42:20.988539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445498 | orchestrator | Sunday 08 March 2026 00:42:20 +0000 (0:00:00.184) 0:00:06.492 **********
2026-03-08 00:42:28.445593 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445605 | orchestrator |
2026-03-08 00:42:28.445613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445620 | orchestrator | Sunday 08 March 2026 00:42:21 +0000 (0:00:00.166) 0:00:06.659 **********
2026-03-08 00:42:28.445627 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-08 00:42:28.445634 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-08 00:42:28.445641 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-08 00:42:28.445648 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-08 00:42:28.445654 | orchestrator |
2026-03-08 00:42:28.445661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445668 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.851) 0:00:07.510 **********
2026-03-08 00:42:28.445675 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445681 | orchestrator |
2026-03-08 00:42:28.445688 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445695 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.167) 0:00:07.678 **********
2026-03-08 00:42:28.445702 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445708 | orchestrator |
2026-03-08 00:42:28.445714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445720 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.196) 0:00:07.875 **********
2026-03-08 00:42:28.445726 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445732 | orchestrator |
2026-03-08 00:42:28.445738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:42:28.445746 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.236) 0:00:08.112 **********
2026-03-08 00:42:28.445752 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445758 | orchestrator |
2026-03-08 00:42:28.445764 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-08 00:42:28.445770 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.197) 0:00:08.309 **********
2026-03-08 00:42:28.445776 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445804 | orchestrator |
2026-03-08 00:42:28.445811 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-08 00:42:28.445818 | orchestrator | Sunday 08 March 2026 00:42:22 +0000 (0:00:00.125) 0:00:08.435 **********
2026-03-08 00:42:28.445826 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb6eff58-5334-5828-9091-c0c39e64aeb1'}})
2026-03-08 00:42:28.445835 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e3bef375-74a7-543b-9618-1787c99aecbb'}})
2026-03-08 00:42:28.445841 | orchestrator |
2026-03-08 00:42:28.445849 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-08 00:42:28.445856 | orchestrator | Sunday 08 March 2026 00:42:23 +0000 (0:00:00.179) 0:00:08.614 **********
2026-03-08 00:42:28.445864 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.445873 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.445880 | orchestrator |
2026-03-08 00:42:28.445886 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-08 00:42:28.445894 | orchestrator | Sunday 08 March 2026 00:42:25 +0000 (0:00:02.089) 0:00:10.704 **********
2026-03-08 00:42:28.445901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.445909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.445917 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.445923 | orchestrator |
2026-03-08 00:42:28.445931 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-08 00:42:28.445937 | orchestrator | Sunday 08 March 2026 00:42:25 +0000 (0:00:00.134) 0:00:10.838 **********
2026-03-08 00:42:28.445944 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.445951 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.445958 | orchestrator |
2026-03-08 00:42:28.445982 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-08 00:42:28.445990 | orchestrator | Sunday 08 March 2026 00:42:26 +0000 (0:00:01.418) 0:00:12.256 **********
2026-03-08 00:42:28.445997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446011 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446072 | orchestrator |
2026-03-08 00:42:28.446080 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-08 00:42:28.446087 | orchestrator | Sunday 08 March 2026 00:42:26 +0000 (0:00:00.145) 0:00:12.402 **********
2026-03-08 00:42:28.446110 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446115 | orchestrator |
2026-03-08 00:42:28.446120 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-08 00:42:28.446125 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.130) 0:00:12.533 **********
2026-03-08 00:42:28.446129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446145 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446150 | orchestrator |
2026-03-08 00:42:28.446155 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-08 00:42:28.446162 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.259) 0:00:12.792 **********
2026-03-08 00:42:28.446168 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446174 | orchestrator |
2026-03-08 00:42:28.446180 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-08 00:42:28.446186 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.123) 0:00:12.915 **********
2026-03-08 00:42:28.446192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446204 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446210 | orchestrator |
2026-03-08 00:42:28.446218 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-08 00:42:28.446224 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.123) 0:00:13.039 **********
2026-03-08 00:42:28.446230 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446237 | orchestrator |
2026-03-08 00:42:28.446244 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-08 00:42:28.446251 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.135) 0:00:13.175 **********
2026-03-08 00:42:28.446274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446292 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446298 | orchestrator |
2026-03-08 00:42:28.446304 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-08 00:42:28.446309 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.133) 0:00:13.317 **********
2026-03-08 00:42:28.446316 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:28.446323 | orchestrator |
2026-03-08 00:42:28.446329 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-08 00:42:28.446337 | orchestrator | Sunday 08 March 2026 00:42:27 +0000 (0:00:00.133) 0:00:13.451 **********
2026-03-08 00:42:28.446344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446356 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446361 | orchestrator |
2026-03-08 00:42:28.446367 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-08 00:42:28.446374 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.131) 0:00:13.582 **********
2026-03-08 00:42:28.446380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446386 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446393 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446399 | orchestrator |
2026-03-08 00:42:28.446406 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-08 00:42:28.446418 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.122) 0:00:13.704 **********
2026-03-08 00:42:28.446424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:28.446431 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})
2026-03-08 00:42:28.446437 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446442 | orchestrator |
2026-03-08 00:42:28.446450 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-08 00:42:28.446456 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.132) 0:00:13.837 **********
2026-03-08 00:42:28.446462 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:28.446468 | orchestrator |
2026-03-08 00:42:28.446473 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-08 00:42:28.446486 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.113) 0:00:13.951 **********
2026-03-08 00:42:34.253240 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253387 | orchestrator |
2026-03-08 00:42:34.253400 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-08 00:42:34.253409 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.130) 0:00:14.081 **********
2026-03-08 00:42:34.253418 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253427 | orchestrator |
2026-03-08 00:42:34.253436 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-08 00:42:34.253444 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.126) 0:00:14.208 **********
2026-03-08 00:42:34.253453 | orchestrator | ok: [testbed-node-3] => {
2026-03-08 00:42:34.253462 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-08 00:42:34.253470 | orchestrator | }
2026-03-08 00:42:34.253478 | orchestrator |
2026-03-08 00:42:34.253486 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-08 00:42:34.253494 | orchestrator | Sunday 08 March 2026 00:42:28 +0000 (0:00:00.258) 0:00:14.466 **********
2026-03-08 00:42:34.253503 | orchestrator | ok: [testbed-node-3] => {
2026-03-08 00:42:34.253512 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-08 00:42:34.253521 | orchestrator | }
2026-03-08 00:42:34.253529 | orchestrator |
2026-03-08 00:42:34.253538 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-08 00:42:34.253546 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.138) 0:00:14.605 **********
2026-03-08 00:42:34.253555 | orchestrator | ok: [testbed-node-3] => {
2026-03-08 00:42:34.253564 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-08 00:42:34.253572 | orchestrator | }
2026-03-08 00:42:34.253581 | orchestrator |
2026-03-08 00:42:34.253589 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-08 00:42:34.253597 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.128) 0:00:14.734 **********
2026-03-08 00:42:34.253605 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:34.253613 | orchestrator |
2026-03-08 00:42:34.253621 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-08 00:42:34.253628 | orchestrator | Sunday 08 March 2026 00:42:29 +0000 (0:00:00.662) 0:00:15.396 **********
2026-03-08 00:42:34.253637 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:34.253645 | orchestrator |
2026-03-08 00:42:34.253653 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-08 00:42:34.253662 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.536) 0:00:15.933 **********
2026-03-08 00:42:34.253670 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:34.253678 | orchestrator |
2026-03-08 00:42:34.253687 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-08 00:42:34.253696 | orchestrator | Sunday 08 March 2026 00:42:30 +0000 (0:00:00.518) 0:00:16.451 **********
2026-03-08 00:42:34.253704 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:42:34.253711 | orchestrator |
2026-03-08 00:42:34.253758 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-08 00:42:34.253767 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.133) 0:00:16.585 **********
2026-03-08 00:42:34.253775 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253782 | orchestrator |
2026-03-08 00:42:34.253790 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-08 00:42:34.253798 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.084) 0:00:16.670 **********
2026-03-08 00:42:34.253806 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253813 | orchestrator |
2026-03-08 00:42:34.253821 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-08 00:42:34.253829 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.098) 0:00:16.768 **********
2026-03-08 00:42:34.253836 | orchestrator | ok: [testbed-node-3] => {
2026-03-08 00:42:34.253844 | orchestrator |     "vgs_report": {
2026-03-08 00:42:34.253852 | orchestrator |         "vg": []
2026-03-08 00:42:34.253861 | orchestrator |     }
2026-03-08 00:42:34.253869 | orchestrator | }
2026-03-08 00:42:34.253877 | orchestrator |
2026-03-08 00:42:34.253885 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-08 00:42:34.253893 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.119) 0:00:16.888 **********
2026-03-08 00:42:34.253901 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253908 | orchestrator |
2026-03-08 00:42:34.253917 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-08 00:42:34.253925 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.145) 0:00:17.034 **********
2026-03-08 00:42:34.253933 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253941 | orchestrator |
2026-03-08 00:42:34.253949 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-08 00:42:34.253956 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.126) 0:00:17.160 **********
2026-03-08 00:42:34.253964 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.253972 | orchestrator |
2026-03-08 00:42:34.253980 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-08 00:42:34.253988 | orchestrator | Sunday 08 March 2026 00:42:31 +0000 (0:00:00.249) 0:00:17.410 **********
2026-03-08 00:42:34.253996 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254004 | orchestrator |
2026-03-08 00:42:34.254012 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-08 00:42:34.254073 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.124) 0:00:17.534 **********
2026-03-08 00:42:34.254082 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254090 | orchestrator |
2026-03-08 00:42:34.254099 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-08 00:42:34.254108 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.125) 0:00:17.660 **********
2026-03-08 00:42:34.254116 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254125 | orchestrator |
2026-03-08 00:42:34.254133 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-08 00:42:34.254142 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.140) 0:00:17.801 **********
2026-03-08 00:42:34.254151 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254159 | orchestrator |
2026-03-08 00:42:34.254167 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-08 00:42:34.254175 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.121) 0:00:17.922 **********
2026-03-08 00:42:34.254201 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254210 | orchestrator |
2026-03-08 00:42:34.254218 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-08 00:42:34.254225 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.132) 0:00:18.055 **********
2026-03-08 00:42:34.254233 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254240 | orchestrator |
2026-03-08 00:42:34.254248 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-08 00:42:34.254312 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.128) 0:00:18.183 **********
2026-03-08 00:42:34.254320 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254328 | orchestrator |
2026-03-08 00:42:34.254336 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-08 00:42:34.254343 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.126) 0:00:18.310 **********
2026-03-08 00:42:34.254350 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254358 | orchestrator |
2026-03-08 00:42:34.254381 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-08 00:42:34.254388 | orchestrator | Sunday 08 March 2026 00:42:32 +0000 (0:00:00.131) 0:00:18.442 **********
2026-03-08 00:42:34.254396 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254403 | orchestrator |
2026-03-08 00:42:34.254411 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-08 00:42:34.254418 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.127) 0:00:18.570 **********
2026-03-08 00:42:34.254426 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254434 | orchestrator |
2026-03-08 00:42:34.254442 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-08 00:42:34.254450 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.133) 0:00:18.703 **********
2026-03-08 00:42:34.254457 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:42:34.254465 | orchestrator |
2026-03-08 00:42:34.254473 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-08 00:42:34.254480 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.124) 0:00:18.828 **********
2026-03-08 00:42:34.254489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})
2026-03-08 00:42:34.254499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg':
'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:34.254507 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:34.254515 | orchestrator | 2026-03-08 00:42:34.254523 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-08 00:42:34.254534 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.294) 0:00:19.122 ********** 2026-03-08 00:42:34.254542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:34.254550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:34.254557 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:34.254565 | orchestrator | 2026-03-08 00:42:34.254572 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-08 00:42:34.254580 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.147) 0:00:19.270 ********** 2026-03-08 00:42:34.254587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:34.254595 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:34.254602 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:34.254625 | orchestrator | 2026-03-08 00:42:34.254633 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-08 00:42:34.254640 | orchestrator | Sunday 08 March 2026 00:42:33 +0000 (0:00:00.152) 0:00:19.422 ********** 2026-03-08 00:42:34.254647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:34.254655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:34.254669 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:34.254677 | orchestrator | 2026-03-08 00:42:34.254684 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-08 00:42:34.254691 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.144) 0:00:19.566 ********** 2026-03-08 00:42:34.254699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:34.254706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:34.254714 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:34.254722 | orchestrator | 2026-03-08 00:42:34.254729 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-08 00:42:34.254737 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.139) 0:00:19.706 ********** 2026-03-08 00:42:34.254752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.008997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009097 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009109 | orchestrator | 2026-03-08 00:42:39.009117 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-08 00:42:39.009125 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.132) 0:00:19.839 ********** 2026-03-08 00:42:39.009133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.009167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009175 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009182 | orchestrator | 2026-03-08 00:42:39.009189 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-08 00:42:39.009196 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.144) 0:00:19.984 ********** 2026-03-08 00:42:39.009204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.009210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009217 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009224 | orchestrator | 2026-03-08 00:42:39.009230 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-08 00:42:39.009236 | orchestrator | Sunday 08 March 2026 00:42:34 +0000 (0:00:00.136) 0:00:20.120 ********** 2026-03-08 00:42:39.009243 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:39.009309 | orchestrator | 2026-03-08 00:42:39.009316 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-08 00:42:39.009324 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 
(0:00:00.504) 0:00:20.625 ********** 2026-03-08 00:42:39.009330 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:39.009337 | orchestrator | 2026-03-08 00:42:39.009343 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-08 00:42:39.009365 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.497) 0:00:21.122 ********** 2026-03-08 00:42:39.009372 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:42:39.009379 | orchestrator | 2026-03-08 00:42:39.009386 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-08 00:42:39.009393 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.153) 0:00:21.276 ********** 2026-03-08 00:42:39.009418 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'vg_name': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'}) 2026-03-08 00:42:39.009427 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'vg_name': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'}) 2026-03-08 00:42:39.009434 | orchestrator | 2026-03-08 00:42:39.009439 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-08 00:42:39.009445 | orchestrator | Sunday 08 March 2026 00:42:35 +0000 (0:00:00.170) 0:00:21.446 ********** 2026-03-08 00:42:39.009451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.009457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009463 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009470 | orchestrator | 2026-03-08 00:42:39.009477 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-08 00:42:39.009483 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.291) 0:00:21.738 ********** 2026-03-08 00:42:39.009489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.009495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009501 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009508 | orchestrator | 2026-03-08 00:42:39.009514 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-08 00:42:39.009521 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.138) 0:00:21.877 ********** 2026-03-08 00:42:39.009527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'})  2026-03-08 00:42:39.009534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'})  2026-03-08 00:42:39.009540 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:42:39.009547 | orchestrator | 2026-03-08 00:42:39.009553 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-08 00:42:39.009559 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.136) 0:00:22.013 ********** 2026-03-08 00:42:39.009583 | orchestrator | ok: [testbed-node-3] => { 2026-03-08 00:42:39.009595 | orchestrator |  "lvm_report": { 2026-03-08 00:42:39.009607 | orchestrator |  "lv": [ 2026-03-08 00:42:39.009619 | orchestrator |  { 2026-03-08 00:42:39.009626 | orchestrator |  "lv_name": 
"osd-block-e3bef375-74a7-543b-9618-1787c99aecbb", 2026-03-08 00:42:39.009634 | orchestrator |  "vg_name": "ceph-e3bef375-74a7-543b-9618-1787c99aecbb" 2026-03-08 00:42:39.009640 | orchestrator |  }, 2026-03-08 00:42:39.009646 | orchestrator |  { 2026-03-08 00:42:39.009652 | orchestrator |  "lv_name": "osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1", 2026-03-08 00:42:39.009658 | orchestrator |  "vg_name": "ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1" 2026-03-08 00:42:39.009665 | orchestrator |  } 2026-03-08 00:42:39.009671 | orchestrator |  ], 2026-03-08 00:42:39.009677 | orchestrator |  "pv": [ 2026-03-08 00:42:39.009683 | orchestrator |  { 2026-03-08 00:42:39.009689 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-08 00:42:39.009695 | orchestrator |  "vg_name": "ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1" 2026-03-08 00:42:39.009701 | orchestrator |  }, 2026-03-08 00:42:39.009707 | orchestrator |  { 2026-03-08 00:42:39.009721 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-08 00:42:39.009727 | orchestrator |  "vg_name": "ceph-e3bef375-74a7-543b-9618-1787c99aecbb" 2026-03-08 00:42:39.009733 | orchestrator |  } 2026-03-08 00:42:39.009739 | orchestrator |  ] 2026-03-08 00:42:39.009744 | orchestrator |  } 2026-03-08 00:42:39.009750 | orchestrator | } 2026-03-08 00:42:39.009756 | orchestrator | 2026-03-08 00:42:39.009762 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-08 00:42:39.009769 | orchestrator | 2026-03-08 00:42:39.009775 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:42:39.009781 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.251) 0:00:22.265 ********** 2026-03-08 00:42:39.009788 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-08 00:42:39.009795 | orchestrator | 2026-03-08 00:42:39.009801 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 
00:42:39.009808 | orchestrator | Sunday 08 March 2026 00:42:36 +0000 (0:00:00.220) 0:00:22.486 ********** 2026-03-08 00:42:39.009814 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:39.009821 | orchestrator | 2026-03-08 00:42:39.009827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.009834 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.213) 0:00:22.699 ********** 2026-03-08 00:42:39.009841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-08 00:42:39.009848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:42:39.009856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:42:39.009863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:42:39.009869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:42:39.009876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:42:39.009883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:42:39.009890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:42:39.009897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-08 00:42:39.009903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:42:39.009909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:42:39.009915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:42:39.009921 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:42:39.009928 | orchestrator | 2026-03-08 00:42:39.009934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.009940 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.409) 0:00:23.109 ********** 2026-03-08 00:42:39.009947 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.009953 | orchestrator | 2026-03-08 00:42:39.009959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.009973 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.167) 0:00:23.276 ********** 2026-03-08 00:42:39.009980 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.009986 | orchestrator | 2026-03-08 00:42:39.009992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.009998 | orchestrator | Sunday 08 March 2026 00:42:37 +0000 (0:00:00.183) 0:00:23.459 ********** 2026-03-08 00:42:39.010004 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.010010 | orchestrator | 2026-03-08 00:42:39.010057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.010071 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.452) 0:00:23.912 ********** 2026-03-08 00:42:39.010077 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.010084 | orchestrator | 2026-03-08 00:42:39.010090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:39.010097 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.172) 0:00:24.084 ********** 2026-03-08 00:42:39.010104 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.010110 | orchestrator | 2026-03-08 00:42:39.010117 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-08 00:42:39.010123 | orchestrator | Sunday 08 March 2026 00:42:38 +0000 (0:00:00.230) 0:00:24.315 ********** 2026-03-08 00:42:39.010130 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:39.010136 | orchestrator | 2026-03-08 00:42:39.010150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703143 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.200) 0:00:24.516 ********** 2026-03-08 00:42:50.703436 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.703454 | orchestrator | 2026-03-08 00:42:50.703467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703479 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.208) 0:00:24.724 ********** 2026-03-08 00:42:50.703489 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.703500 | orchestrator | 2026-03-08 00:42:50.703511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703522 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.268) 0:00:24.992 ********** 2026-03-08 00:42:50.703533 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59) 2026-03-08 00:42:50.703545 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59) 2026-03-08 00:42:50.703556 | orchestrator | 2026-03-08 00:42:50.703567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703577 | orchestrator | Sunday 08 March 2026 00:42:39 +0000 (0:00:00.404) 0:00:25.397 ********** 2026-03-08 00:42:50.703588 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c) 2026-03-08 00:42:50.703599 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c) 2026-03-08 00:42:50.703610 | orchestrator | 2026-03-08 00:42:50.703621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703634 | orchestrator | Sunday 08 March 2026 00:42:40 +0000 (0:00:00.423) 0:00:25.821 ********** 2026-03-08 00:42:50.703647 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5) 2026-03-08 00:42:50.703659 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5) 2026-03-08 00:42:50.703671 | orchestrator | 2026-03-08 00:42:50.703730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703773 | orchestrator | Sunday 08 March 2026 00:42:40 +0000 (0:00:00.535) 0:00:26.356 ********** 2026-03-08 00:42:50.703803 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9) 2026-03-08 00:42:50.703817 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9) 2026-03-08 00:42:50.703830 | orchestrator | 2026-03-08 00:42:50.703843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:42:50.703855 | orchestrator | Sunday 08 March 2026 00:42:41 +0000 (0:00:00.806) 0:00:27.163 ********** 2026-03-08 00:42:50.703868 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-08 00:42:50.703880 | orchestrator | 2026-03-08 00:42:50.703892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.703913 | orchestrator | Sunday 08 March 2026 00:42:42 +0000 (0:00:00.577) 0:00:27.741 ********** 2026-03-08 00:42:50.703950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-08 00:42:50.703963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-08 00:42:50.703973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-08 00:42:50.703984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-08 00:42:50.703994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-08 00:42:50.704005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-08 00:42:50.704015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-08 00:42:50.704026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-08 00:42:50.704036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-08 00:42:50.704047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-08 00:42:50.704057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-08 00:42:50.704068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-08 00:42:50.704079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-08 00:42:50.704090 | orchestrator | 2026-03-08 00:42:50.704100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704111 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:01.077) 0:00:28.818 ********** 2026-03-08 00:42:50.704122 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704133 | orchestrator | 2026-03-08 
00:42:50.704143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704154 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:00.199) 0:00:29.018 ********** 2026-03-08 00:42:50.704165 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704175 | orchestrator | 2026-03-08 00:42:50.704186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704197 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:00.205) 0:00:29.223 ********** 2026-03-08 00:42:50.704207 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704218 | orchestrator | 2026-03-08 00:42:50.704275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704289 | orchestrator | Sunday 08 March 2026 00:42:43 +0000 (0:00:00.228) 0:00:29.453 ********** 2026-03-08 00:42:50.704300 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704311 | orchestrator | 2026-03-08 00:42:50.704321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704332 | orchestrator | Sunday 08 March 2026 00:42:44 +0000 (0:00:00.212) 0:00:29.665 ********** 2026-03-08 00:42:50.704342 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704353 | orchestrator | 2026-03-08 00:42:50.704363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704374 | orchestrator | Sunday 08 March 2026 00:42:44 +0000 (0:00:00.229) 0:00:29.894 ********** 2026-03-08 00:42:50.704384 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704395 | orchestrator | 2026-03-08 00:42:50.704405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704416 | orchestrator | Sunday 08 March 2026 00:42:44 +0000 (0:00:00.210) 
0:00:30.104 ********** 2026-03-08 00:42:50.704427 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704437 | orchestrator | 2026-03-08 00:42:50.704448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704458 | orchestrator | Sunday 08 March 2026 00:42:44 +0000 (0:00:00.222) 0:00:30.326 ********** 2026-03-08 00:42:50.704477 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704488 | orchestrator | 2026-03-08 00:42:50.704499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704510 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.219) 0:00:30.545 ********** 2026-03-08 00:42:50.704520 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-08 00:42:50.704531 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-08 00:42:50.704542 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-08 00:42:50.704552 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-08 00:42:50.704563 | orchestrator | 2026-03-08 00:42:50.704573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704584 | orchestrator | Sunday 08 March 2026 00:42:45 +0000 (0:00:00.841) 0:00:31.387 ********** 2026-03-08 00:42:50.704595 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704605 | orchestrator | 2026-03-08 00:42:50.704616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704627 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.180) 0:00:31.568 ********** 2026-03-08 00:42:50.704643 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704654 | orchestrator | 2026-03-08 00:42:50.704664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704675 | orchestrator | Sunday 08 
March 2026 00:42:46 +0000 (0:00:00.621) 0:00:32.189 ********** 2026-03-08 00:42:50.704685 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704696 | orchestrator | 2026-03-08 00:42:50.704707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-08 00:42:50.704717 | orchestrator | Sunday 08 March 2026 00:42:46 +0000 (0:00:00.216) 0:00:32.405 ********** 2026-03-08 00:42:50.704728 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704738 | orchestrator | 2026-03-08 00:42:50.704749 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-08 00:42:50.704759 | orchestrator | Sunday 08 March 2026 00:42:47 +0000 (0:00:00.211) 0:00:32.617 ********** 2026-03-08 00:42:50.704770 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704780 | orchestrator | 2026-03-08 00:42:50.704791 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-08 00:42:50.704802 | orchestrator | Sunday 08 March 2026 00:42:47 +0000 (0:00:00.137) 0:00:32.755 ********** 2026-03-08 00:42:50.704812 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9614fc2-8329-596c-937c-60ceb39d5fd3'}}) 2026-03-08 00:42:50.704823 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eb569be8-41bf-5aa1-acb9-f145abad3137'}}) 2026-03-08 00:42:50.704834 | orchestrator | 2026-03-08 00:42:50.704845 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-08 00:42:50.704855 | orchestrator | Sunday 08 March 2026 00:42:47 +0000 (0:00:00.209) 0:00:32.965 ********** 2026-03-08 00:42:50.704867 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'}) 2026-03-08 00:42:50.704879 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'}) 2026-03-08 00:42:50.704889 | orchestrator | 2026-03-08 00:42:50.704900 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-08 00:42:50.704911 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:01.817) 0:00:34.782 ********** 2026-03-08 00:42:50.704921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:50.704933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:50.704950 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:50.704961 | orchestrator | 2026-03-08 00:42:50.704972 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-08 00:42:50.704982 | orchestrator | Sunday 08 March 2026 00:42:49 +0000 (0:00:00.162) 0:00:34.945 ********** 2026-03-08 00:42:50.704993 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'}) 2026-03-08 00:42:50.705010 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'}) 2026-03-08 00:42:56.335141 | orchestrator | 2026-03-08 00:42:56.335303 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-08 00:42:56.335326 | orchestrator | Sunday 08 March 2026 00:42:50 +0000 (0:00:01.347) 0:00:36.292 ********** 2026-03-08 00:42:56.335342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 
'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335363 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335373 | orchestrator | 2026-03-08 00:42:56.335382 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-08 00:42:56.335390 | orchestrator | Sunday 08 March 2026 00:42:50 +0000 (0:00:00.147) 0:00:36.440 ********** 2026-03-08 00:42:56.335399 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335408 | orchestrator | 2026-03-08 00:42:56.335416 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-08 00:42:56.335425 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:00.143) 0:00:36.584 ********** 2026-03-08 00:42:56.335433 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335453 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335467 | orchestrator | 2026-03-08 00:42:56.335481 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-08 00:42:56.335494 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:00.159) 0:00:36.744 ********** 2026-03-08 00:42:56.335508 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335523 | orchestrator | 2026-03-08 00:42:56.335532 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-08 00:42:56.335541 | orchestrator | Sunday 
08 March 2026 00:42:51 +0000 (0:00:00.138) 0:00:36.882 ********** 2026-03-08 00:42:56.335550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335568 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335576 | orchestrator | 2026-03-08 00:42:56.335585 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-08 00:42:56.335594 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:00.377) 0:00:37.260 ********** 2026-03-08 00:42:56.335602 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335611 | orchestrator | 2026-03-08 00:42:56.335620 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-08 00:42:56.335631 | orchestrator | Sunday 08 March 2026 00:42:51 +0000 (0:00:00.142) 0:00:37.402 ********** 2026-03-08 00:42:56.335646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335700 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335714 | orchestrator | 2026-03-08 00:42:56.335730 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-08 00:42:56.335765 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.160) 0:00:37.563 ********** 2026-03-08 00:42:56.335781 | orchestrator | ok: [testbed-node-4] 
2026-03-08 00:42:56.335797 | orchestrator | 2026-03-08 00:42:56.335811 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-08 00:42:56.335827 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.147) 0:00:37.711 ********** 2026-03-08 00:42:56.335843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335869 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335883 | orchestrator | 2026-03-08 00:42:56.335896 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-08 00:42:56.335910 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.161) 0:00:37.872 ********** 2026-03-08 00:42:56.335923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:42:56.335938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.335951 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.335965 | orchestrator | 2026-03-08 00:42:56.335980 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-08 00:42:56.336013 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.166) 0:00:38.039 ********** 2026-03-08 00:42:56.336028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 
00:42:56.336044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:42:56.336057 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336071 | orchestrator | 2026-03-08 00:42:56.336085 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-08 00:42:56.336099 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.188) 0:00:38.228 ********** 2026-03-08 00:42:56.336114 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336124 | orchestrator | 2026-03-08 00:42:56.336136 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-08 00:42:56.336150 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.141) 0:00:38.369 ********** 2026-03-08 00:42:56.336165 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336180 | orchestrator | 2026-03-08 00:42:56.336190 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-08 00:42:56.336199 | orchestrator | Sunday 08 March 2026 00:42:52 +0000 (0:00:00.127) 0:00:38.497 ********** 2026-03-08 00:42:56.336207 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336215 | orchestrator | 2026-03-08 00:42:56.336224 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-08 00:42:56.336256 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.142) 0:00:38.640 ********** 2026-03-08 00:42:56.336265 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:42:56.336273 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-08 00:42:56.336293 | orchestrator | } 2026-03-08 00:42:56.336302 | orchestrator | 2026-03-08 00:42:56.336311 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-08 
00:42:56.336319 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.126) 0:00:38.767 ********** 2026-03-08 00:42:56.336328 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:42:56.336336 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-08 00:42:56.336345 | orchestrator | } 2026-03-08 00:42:56.336353 | orchestrator | 2026-03-08 00:42:56.336379 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-08 00:42:56.336388 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.118) 0:00:38.885 ********** 2026-03-08 00:42:56.336397 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:42:56.336406 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-08 00:42:56.336414 | orchestrator | } 2026-03-08 00:42:56.336423 | orchestrator | 2026-03-08 00:42:56.336432 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-08 00:42:56.336441 | orchestrator | Sunday 08 March 2026 00:42:53 +0000 (0:00:00.275) 0:00:39.160 ********** 2026-03-08 00:42:56.336541 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:56.336560 | orchestrator | 2026-03-08 00:42:56.336576 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-08 00:42:56.336590 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.488) 0:00:39.649 ********** 2026-03-08 00:42:56.336604 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:56.336618 | orchestrator | 2026-03-08 00:42:56.336631 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-08 00:42:56.336645 | orchestrator | Sunday 08 March 2026 00:42:54 +0000 (0:00:00.643) 0:00:40.293 ********** 2026-03-08 00:42:56.336659 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:56.336673 | orchestrator | 2026-03-08 00:42:56.336689 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-08 00:42:56.336703 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.501) 0:00:40.794 ********** 2026-03-08 00:42:56.336719 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:42:56.336734 | orchestrator | 2026-03-08 00:42:56.336749 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-08 00:42:56.336761 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.139) 0:00:40.934 ********** 2026-03-08 00:42:56.336769 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336778 | orchestrator | 2026-03-08 00:42:56.336786 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-08 00:42:56.336795 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.143) 0:00:41.078 ********** 2026-03-08 00:42:56.336803 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336812 | orchestrator | 2026-03-08 00:42:56.336820 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-08 00:42:56.336829 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.126) 0:00:41.205 ********** 2026-03-08 00:42:56.336837 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:42:56.336846 | orchestrator |  "vgs_report": { 2026-03-08 00:42:56.336854 | orchestrator |  "vg": [] 2026-03-08 00:42:56.336863 | orchestrator |  } 2026-03-08 00:42:56.336872 | orchestrator | } 2026-03-08 00:42:56.336880 | orchestrator | 2026-03-08 00:42:56.336889 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-08 00:42:56.336898 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.127) 0:00:41.332 ********** 2026-03-08 00:42:56.336906 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336915 | orchestrator | 2026-03-08 00:42:56.336923 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-08 00:42:56.336932 | orchestrator | Sunday 08 March 2026 00:42:55 +0000 (0:00:00.127) 0:00:41.459 ********** 2026-03-08 00:42:56.336940 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336949 | orchestrator | 2026-03-08 00:42:56.336957 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-08 00:42:56.336975 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.129) 0:00:41.589 ********** 2026-03-08 00:42:56.336983 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.336992 | orchestrator | 2026-03-08 00:42:56.337000 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-08 00:42:56.337011 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.122) 0:00:41.712 ********** 2026-03-08 00:42:56.337026 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:42:56.337041 | orchestrator | 2026-03-08 00:42:56.337068 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-08 00:43:01.567895 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.128) 0:00:41.840 ********** 2026-03-08 00:43:01.568012 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568030 | orchestrator | 2026-03-08 00:43:01.568043 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-08 00:43:01.568055 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.329) 0:00:42.169 ********** 2026-03-08 00:43:01.568066 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568077 | orchestrator | 2026-03-08 00:43:01.568088 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-08 00:43:01.568099 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.143) 0:00:42.313 ********** 2026-03-08 00:43:01.568110 | orchestrator | skipping: [testbed-node-4] 
2026-03-08 00:43:01.568121 | orchestrator | 2026-03-08 00:43:01.568131 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-08 00:43:01.568142 | orchestrator | Sunday 08 March 2026 00:42:56 +0000 (0:00:00.158) 0:00:42.472 ********** 2026-03-08 00:43:01.568153 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568164 | orchestrator | 2026-03-08 00:43:01.568175 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-08 00:43:01.568186 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.159) 0:00:42.632 ********** 2026-03-08 00:43:01.568197 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568208 | orchestrator | 2026-03-08 00:43:01.568218 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-08 00:43:01.568294 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.174) 0:00:42.806 ********** 2026-03-08 00:43:01.568307 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568317 | orchestrator | 2026-03-08 00:43:01.568328 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-08 00:43:01.568339 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.171) 0:00:42.978 ********** 2026-03-08 00:43:01.568350 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568361 | orchestrator | 2026-03-08 00:43:01.568371 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-08 00:43:01.568382 | orchestrator | Sunday 08 March 2026 00:42:57 +0000 (0:00:00.188) 0:00:43.167 ********** 2026-03-08 00:43:01.568412 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568426 | orchestrator | 2026-03-08 00:43:01.568438 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-08 00:43:01.568451 | orchestrator | 
Sunday 08 March 2026 00:42:57 +0000 (0:00:00.198) 0:00:43.365 ********** 2026-03-08 00:43:01.568463 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568476 | orchestrator | 2026-03-08 00:43:01.568490 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-08 00:43:01.568503 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.185) 0:00:43.552 ********** 2026-03-08 00:43:01.568516 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568529 | orchestrator | 2026-03-08 00:43:01.568542 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-08 00:43:01.568556 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.177) 0:00:43.729 ********** 2026-03-08 00:43:01.568569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.568602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.568616 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568628 | orchestrator | 2026-03-08 00:43:01.568641 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-08 00:43:01.568653 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.188) 0:00:43.918 ********** 2026-03-08 00:43:01.568667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.568680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.568695 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:43:01.568712 | orchestrator | 2026-03-08 00:43:01.568729 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-08 00:43:01.568748 | orchestrator | Sunday 08 March 2026 00:42:58 +0000 (0:00:00.155) 0:00:44.074 ********** 2026-03-08 00:43:01.568766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.568785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.568804 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568815 | orchestrator | 2026-03-08 00:43:01.568825 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-08 00:43:01.568836 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.471) 0:00:44.545 ********** 2026-03-08 00:43:01.568847 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.568858 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.568869 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568879 | orchestrator | 2026-03-08 00:43:01.568910 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-08 00:43:01.568922 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.183) 0:00:44.729 ********** 2026-03-08 00:43:01.568933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 
'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.568944 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.568955 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.568965 | orchestrator | 2026-03-08 00:43:01.568976 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-08 00:43:01.568986 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.163) 0:00:44.892 ********** 2026-03-08 00:43:01.568997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.569008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.569018 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.569029 | orchestrator | 2026-03-08 00:43:01.569039 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-08 00:43:01.569050 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.141) 0:00:45.034 ********** 2026-03-08 00:43:01.569061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.569080 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.569091 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.569102 | orchestrator | 2026-03-08 00:43:01.569113 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-08 
00:43:01.569123 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.154) 0:00:45.188 ********** 2026-03-08 00:43:01.569134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.569145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.569156 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.569167 | orchestrator | 2026-03-08 00:43:01.569177 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-08 00:43:01.569188 | orchestrator | Sunday 08 March 2026 00:42:59 +0000 (0:00:00.154) 0:00:45.343 ********** 2026-03-08 00:43:01.569198 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:43:01.569280 | orchestrator | 2026-03-08 00:43:01.569299 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-08 00:43:01.569311 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.637) 0:00:45.981 ********** 2026-03-08 00:43:01.569321 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:43:01.569332 | orchestrator | 2026-03-08 00:43:01.569342 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-08 00:43:01.569353 | orchestrator | Sunday 08 March 2026 00:43:00 +0000 (0:00:00.524) 0:00:46.506 ********** 2026-03-08 00:43:01.569363 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:43:01.569374 | orchestrator | 2026-03-08 00:43:01.569384 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-08 00:43:01.569395 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.152) 0:00:46.658 ********** 2026-03-08 00:43:01.569405 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'vg_name': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'}) 2026-03-08 00:43:01.569417 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'vg_name': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'}) 2026-03-08 00:43:01.569428 | orchestrator | 2026-03-08 00:43:01.569438 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-08 00:43:01.569449 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.175) 0:00:46.834 ********** 2026-03-08 00:43:01.569459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.569470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:01.569480 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:01.569491 | orchestrator | 2026-03-08 00:43:01.569502 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-08 00:43:01.569512 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.158) 0:00:46.992 ********** 2026-03-08 00:43:01.569523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:01.569542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:07.628957 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:07.629057 | orchestrator | 2026-03-08 00:43:07.629073 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-08 00:43:07.629087 | 
orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.162) 0:00:47.155 ********** 2026-03-08 00:43:07.629100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'})  2026-03-08 00:43:07.629114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'})  2026-03-08 00:43:07.629127 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:43:07.629140 | orchestrator | 2026-03-08 00:43:07.629151 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-08 00:43:07.629159 | orchestrator | Sunday 08 March 2026 00:43:01 +0000 (0:00:00.156) 0:00:47.312 ********** 2026-03-08 00:43:07.629166 | orchestrator | ok: [testbed-node-4] => { 2026-03-08 00:43:07.629173 | orchestrator |  "lvm_report": { 2026-03-08 00:43:07.629181 | orchestrator |  "lv": [ 2026-03-08 00:43:07.629188 | orchestrator |  { 2026-03-08 00:43:07.629195 | orchestrator |  "lv_name": "osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3", 2026-03-08 00:43:07.629203 | orchestrator |  "vg_name": "ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3" 2026-03-08 00:43:07.629210 | orchestrator |  }, 2026-03-08 00:43:07.629217 | orchestrator |  { 2026-03-08 00:43:07.629268 | orchestrator |  "lv_name": "osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137", 2026-03-08 00:43:07.629276 | orchestrator |  "vg_name": "ceph-eb569be8-41bf-5aa1-acb9-f145abad3137" 2026-03-08 00:43:07.629283 | orchestrator |  } 2026-03-08 00:43:07.629290 | orchestrator |  ], 2026-03-08 00:43:07.629297 | orchestrator |  "pv": [ 2026-03-08 00:43:07.629304 | orchestrator |  { 2026-03-08 00:43:07.629312 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-08 00:43:07.629323 | orchestrator |  "vg_name": "ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3" 2026-03-08 00:43:07.629330 | orchestrator |  }, 2026-03-08 
00:43:07.629338 | orchestrator |  { 2026-03-08 00:43:07.629345 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-08 00:43:07.629352 | orchestrator |  "vg_name": "ceph-eb569be8-41bf-5aa1-acb9-f145abad3137" 2026-03-08 00:43:07.629359 | orchestrator |  } 2026-03-08 00:43:07.629366 | orchestrator |  ] 2026-03-08 00:43:07.629373 | orchestrator |  } 2026-03-08 00:43:07.629381 | orchestrator | } 2026-03-08 00:43:07.629388 | orchestrator | 2026-03-08 00:43:07.629395 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-08 00:43:07.629403 | orchestrator | 2026-03-08 00:43:07.629410 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-08 00:43:07.629417 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.502) 0:00:47.814 ********** 2026-03-08 00:43:07.629425 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-08 00:43:07.629432 | orchestrator | 2026-03-08 00:43:07.629439 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-08 00:43:07.629447 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.255) 0:00:48.070 ********** 2026-03-08 00:43:07.629454 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:43:07.629461 | orchestrator | 2026-03-08 00:43:07.629468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-08 00:43:07.629477 | orchestrator | Sunday 08 March 2026 00:43:02 +0000 (0:00:00.241) 0:00:48.311 ********** 2026-03-08 00:43:07.629486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-08 00:43:07.629495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-08 00:43:07.629503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-08 00:43:07.629512 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:43:07.629526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:43:07.629534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:43:07.629542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:43:07.629551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:43:07.629559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-08 00:43:07.629582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:43:07.629591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:43:07.629606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:43:07.629614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:43:07.629622 | orchestrator | 
2026-03-08 00:43:07.629631 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629640 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.401) 0:00:48.712 **********
2026-03-08 00:43:07.629649 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629657 | orchestrator | 
2026-03-08 00:43:07.629666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629675 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.189) 0:00:48.901 **********
2026-03-08 00:43:07.629683 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629691 | orchestrator | 
2026-03-08 00:43:07.629700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629722 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.201) 0:00:49.103 **********
2026-03-08 00:43:07.629732 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629740 | orchestrator | 
2026-03-08 00:43:07.629749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629758 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.202) 0:00:49.306 **********
2026-03-08 00:43:07.629766 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629775 | orchestrator | 
2026-03-08 00:43:07.629783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629792 | orchestrator | Sunday 08 March 2026 00:43:03 +0000 (0:00:00.196) 0:00:49.502 **********
2026-03-08 00:43:07.629801 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629810 | orchestrator | 
2026-03-08 00:43:07.629819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629828 | orchestrator | Sunday 08 March 2026 00:43:04 +0000 (0:00:00.616) 0:00:50.119 **********
2026-03-08 00:43:07.629837 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629844 | orchestrator | 
2026-03-08 00:43:07.629851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629858 | orchestrator | Sunday 08 March 2026 00:43:04 +0000 (0:00:00.192) 0:00:50.311 **********
2026-03-08 00:43:07.629865 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629873 | orchestrator | 
2026-03-08 00:43:07.629880 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629887 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.262) 0:00:50.573 **********
2026-03-08 00:43:07.629894 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:07.629901 | orchestrator | 
2026-03-08 00:43:07.629908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629916 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.208) 0:00:50.782 **********
2026-03-08 00:43:07.629923 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3)
2026-03-08 00:43:07.629935 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3)
2026-03-08 00:43:07.629945 | orchestrator | 
2026-03-08 00:43:07.629953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.629960 | orchestrator | Sunday 08 March 2026 00:43:05 +0000 (0:00:00.421) 0:00:51.203 **********
2026-03-08 00:43:07.629967 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49)
2026-03-08 00:43:07.629978 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49)
2026-03-08 00:43:07.629991 | orchestrator | 
2026-03-08 00:43:07.630003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.630070 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.413) 0:00:51.617 **********
2026-03-08 00:43:07.630085 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea)
2026-03-08 00:43:07.630096 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea)
2026-03-08 00:43:07.630108 | orchestrator | 
2026-03-08 00:43:07.630120 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.630134 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.424) 0:00:52.041 **********
2026-03-08 00:43:07.630146 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464)
2026-03-08 00:43:07.630159 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464)
2026-03-08 00:43:07.630171 | orchestrator | 
2026-03-08 00:43:07.630184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-08 00:43:07.630196 | orchestrator | Sunday 08 March 2026 00:43:06 +0000 (0:00:00.434) 0:00:52.475 **********
2026-03-08 00:43:07.630209 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-08 00:43:07.630238 | orchestrator | 
2026-03-08 00:43:07.630251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:07.630264 | orchestrator | Sunday 08 March 2026 00:43:07 +0000 (0:00:00.331) 0:00:52.807 **********
2026-03-08 00:43:07.630277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-08 00:43:07.630289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-08 00:43:07.630302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-08 00:43:07.630315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-08 00:43:07.630328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-08 00:43:07.630336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-08 00:43:07.630343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-08 00:43:07.630350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-08 00:43:07.630358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-08 00:43:07.630365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-08 00:43:07.630372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-08 00:43:07.630387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-08 00:43:16.530572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-08 00:43:16.530710 | orchestrator | 
2026-03-08 00:43:16.530727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530740 | orchestrator | Sunday 08 March 2026 00:43:07 +0000 (0:00:00.409) 0:00:53.217 **********
2026-03-08 00:43:16.530771 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530784 | orchestrator | 
2026-03-08 00:43:16.530795 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530806 | orchestrator | Sunday 08 March 2026 00:43:07 +0000 (0:00:00.206) 0:00:53.424 **********
2026-03-08 00:43:16.530817 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530828 | orchestrator | 
2026-03-08 00:43:16.530838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530849 | orchestrator | Sunday 08 March 2026 00:43:08 +0000 (0:00:00.647) 0:00:54.072 **********
2026-03-08 00:43:16.530860 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530871 | orchestrator | 
2026-03-08 00:43:16.530882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530892 | orchestrator | Sunday 08 March 2026 00:43:08 +0000 (0:00:00.264) 0:00:54.337 **********
2026-03-08 00:43:16.530903 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530914 | orchestrator | 
2026-03-08 00:43:16.530924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530935 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.219) 0:00:54.557 **********
2026-03-08 00:43:16.530946 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530957 | orchestrator | 
2026-03-08 00:43:16.530967 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.530978 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.207) 0:00:54.764 **********
2026-03-08 00:43:16.530989 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.530999 | orchestrator | 
2026-03-08 00:43:16.531025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531037 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.242) 0:00:55.007 **********
2026-03-08 00:43:16.531047 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531058 | orchestrator | 
2026-03-08 00:43:16.531069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531079 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.204) 0:00:55.212 **********
2026-03-08 00:43:16.531092 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531105 | orchestrator | 
2026-03-08 00:43:16.531118 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531131 | orchestrator | Sunday 08 March 2026 00:43:09 +0000 (0:00:00.181) 0:00:55.393 **********
2026-03-08 00:43:16.531144 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-08 00:43:16.531157 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-08 00:43:16.531175 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-08 00:43:16.531194 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-08 00:43:16.531210 | orchestrator | 
2026-03-08 00:43:16.531312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531331 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.614) 0:00:56.008 **********
2026-03-08 00:43:16.531344 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531357 | orchestrator | 
2026-03-08 00:43:16.531370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531382 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.179) 0:00:56.188 **********
2026-03-08 00:43:16.531395 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531407 | orchestrator | 
2026-03-08 00:43:16.531419 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531432 | orchestrator | Sunday 08 March 2026 00:43:10 +0000 (0:00:00.173) 0:00:56.361 **********
2026-03-08 00:43:16.531445 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531457 | orchestrator | 
2026-03-08 00:43:16.531467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-08 00:43:16.531478 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.184) 0:00:56.545 **********
2026-03-08 00:43:16.531499 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531509 | orchestrator | 
2026-03-08 00:43:16.531520 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-08 00:43:16.531530 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.195) 0:00:56.740 **********
2026-03-08 00:43:16.531541 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531552 | orchestrator | 
2026-03-08 00:43:16.531562 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-08 00:43:16.531573 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.310) 0:00:57.051 **********
2026-03-08 00:43:16.531584 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5bde4b8d-c924-5d1f-8c9a-71f523250ead'}})
2026-03-08 00:43:16.531595 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ad275011-1eda-59d8-b818-a96e3c140717'}})
2026-03-08 00:43:16.531606 | orchestrator | 
2026-03-08 00:43:16.531616 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-08 00:43:16.531627 | orchestrator | Sunday 08 March 2026 00:43:11 +0000 (0:00:00.207) 0:00:57.258 **********
2026-03-08 00:43:16.531640 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'})
2026-03-08 00:43:16.531661 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})
2026-03-08 00:43:16.531679 | orchestrator | 
2026-03-08 00:43:16.531697 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-08 00:43:16.531740 | orchestrator | Sunday 08 March 2026 00:43:13 +0000 (0:00:01.900) 0:00:59.158 **********
2026-03-08 00:43:16.531761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:16.531780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:16.531798 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531817 | orchestrator | 
2026-03-08 00:43:16.531836 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-08 00:43:16.531856 | orchestrator | Sunday 08 March 2026 00:43:13 +0000 (0:00:00.159) 0:00:59.318 **********
2026-03-08 00:43:16.531875 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'})
2026-03-08 00:43:16.531894 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})
2026-03-08 00:43:16.531906 | orchestrator | 
2026-03-08 00:43:16.531917 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-08 00:43:16.531927 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:01.333) 0:01:00.651 **********
2026-03-08 00:43:16.531938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:16.531949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:16.531959 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.531970 | orchestrator | 
2026-03-08 00:43:16.531981 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-08 00:43:16.531992 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.149) 0:01:00.801 **********
2026-03-08 00:43:16.532003 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532013 | orchestrator | 
2026-03-08 00:43:16.532024 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-08 00:43:16.532035 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.128) 0:01:00.929 **********
2026-03-08 00:43:16.532055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:16.532066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:16.532077 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532087 | orchestrator | 
2026-03-08 00:43:16.532098 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-08 00:43:16.532109 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.148) 0:01:01.078 **********
2026-03-08 00:43:16.532119 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532130 | orchestrator | 
2026-03-08 00:43:16.532141 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-08 00:43:16.532152 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.126) 0:01:01.204 **********
2026-03-08 00:43:16.532162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:16.532173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:16.532184 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532195 | orchestrator | 
2026-03-08 00:43:16.532205 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-08 00:43:16.532249 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.142) 0:01:01.346 **********
2026-03-08 00:43:16.532261 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532272 | orchestrator | 
2026-03-08 00:43:16.532283 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-08 00:43:16.532293 | orchestrator | Sunday 08 March 2026 00:43:15 +0000 (0:00:00.147) 0:01:01.494 **********
2026-03-08 00:43:16.532304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:16.532315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:16.532326 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:16.532336 | orchestrator | 
2026-03-08 00:43:16.532347 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-08 00:43:16.532358 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.148) 0:01:01.643 **********
2026-03-08 00:43:16.532369 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:16.532380 | orchestrator | 
2026-03-08 00:43:16.532390 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-08 00:43:16.532401 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.325) 0:01:01.968 **********
2026-03-08 00:43:16.532421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:22.561140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:22.561267 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561284 | orchestrator | 
2026-03-08 00:43:22.561301 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-08 00:43:22.561313 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.156) 0:01:02.124 **********
2026-03-08 00:43:22.561322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:22.561331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:22.561358 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561368 | orchestrator | 
2026-03-08 00:43:22.561377 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-08 00:43:22.561385 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.157) 0:01:02.281 **********
2026-03-08 00:43:22.561394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:22.561403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:22.561411 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561420 | orchestrator | 
2026-03-08 00:43:22.561429 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-08 00:43:22.561449 | orchestrator | Sunday 08 March 2026 00:43:16 +0000 (0:00:00.153) 0:01:02.435 **********
2026-03-08 00:43:22.561458 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561467 | orchestrator | 
2026-03-08 00:43:22.561475 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-08 00:43:22.561484 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.157) 0:01:02.593 **********
2026-03-08 00:43:22.561492 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561501 | orchestrator | 
2026-03-08 00:43:22.561509 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-08 00:43:22.561518 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.131) 0:01:02.724 **********
2026-03-08 00:43:22.561526 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561534 | orchestrator | 
2026-03-08 00:43:22.561543 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-08 00:43:22.561551 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.126) 0:01:02.851 **********
2026-03-08 00:43:22.561560 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 00:43:22.561569 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-08 00:43:22.561578 | orchestrator | }
2026-03-08 00:43:22.561587 | orchestrator | 
2026-03-08 00:43:22.561596 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-08 00:43:22.561604 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.144) 0:01:02.995 **********
2026-03-08 00:43:22.561613 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 00:43:22.561621 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-08 00:43:22.561630 | orchestrator | }
2026-03-08 00:43:22.561638 | orchestrator | 
2026-03-08 00:43:22.561647 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-08 00:43:22.561655 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.150) 0:01:03.146 **********
2026-03-08 00:43:22.561664 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 00:43:22.561672 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-08 00:43:22.561681 | orchestrator | }
2026-03-08 00:43:22.561692 | orchestrator | 
2026-03-08 00:43:22.561702 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-08 00:43:22.561713 | orchestrator | Sunday 08 March 2026 00:43:17 +0000 (0:00:00.140) 0:01:03.287 **********
2026-03-08 00:43:22.561723 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:22.561733 | orchestrator | 
2026-03-08 00:43:22.561743 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-08 00:43:22.561753 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.625) 0:01:03.913 **********
2026-03-08 00:43:22.561774 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:22.561793 | orchestrator | 
2026-03-08 00:43:22.561803 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-08 00:43:22.561813 | orchestrator | Sunday 08 March 2026 00:43:18 +0000 (0:00:00.510) 0:01:04.423 **********
2026-03-08 00:43:22.561823 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:22.561840 | orchestrator | 
2026-03-08 00:43:22.561850 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-08 00:43:22.561859 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.693) 0:01:05.117 **********
2026-03-08 00:43:22.561870 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:22.561880 | orchestrator | 
2026-03-08 00:43:22.561890 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-08 00:43:22.561900 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.152) 0:01:05.269 **********
2026-03-08 00:43:22.561910 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561920 | orchestrator | 
2026-03-08 00:43:22.561930 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-08 00:43:22.561940 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.108) 0:01:05.377 **********
2026-03-08 00:43:22.561950 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.561960 | orchestrator | 
2026-03-08 00:43:22.561971 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-08 00:43:22.561981 | orchestrator | Sunday 08 March 2026 00:43:19 +0000 (0:00:00.110) 0:01:05.488 **********
2026-03-08 00:43:22.561990 | orchestrator | ok: [testbed-node-5] => {
2026-03-08 00:43:22.562001 | orchestrator |     "vgs_report": {
2026-03-08 00:43:22.562012 | orchestrator |         "vg": []
2026-03-08 00:43:22.562128 | orchestrator |     }
2026-03-08 00:43:22.562140 | orchestrator | }
2026-03-08 00:43:22.562150 | orchestrator | 
2026-03-08 00:43:22.562159 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-08 00:43:22.562168 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.148) 0:01:05.637 **********
2026-03-08 00:43:22.562176 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562185 | orchestrator | 
2026-03-08 00:43:22.562193 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-08 00:43:22.562201 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.152) 0:01:05.789 **********
2026-03-08 00:43:22.562238 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562248 | orchestrator | 
2026-03-08 00:43:22.562256 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-08 00:43:22.562265 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.135) 0:01:05.925 **********
2026-03-08 00:43:22.562273 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562282 | orchestrator | 
2026-03-08 00:43:22.562290 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-08 00:43:22.562299 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.127) 0:01:06.053 **********
2026-03-08 00:43:22.562307 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562316 | orchestrator | 
2026-03-08 00:43:22.562325 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-08 00:43:22.562333 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.136) 0:01:06.189 **********
2026-03-08 00:43:22.562342 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562350 | orchestrator | 
2026-03-08 00:43:22.562358 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-08 00:43:22.562367 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.134) 0:01:06.324 **********
2026-03-08 00:43:22.562375 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562384 | orchestrator | 
2026-03-08 00:43:22.562392 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-08 00:43:22.562406 | orchestrator | Sunday 08 March 2026 00:43:20 +0000 (0:00:00.136) 0:01:06.461 **********
2026-03-08 00:43:22.562415 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562423 | orchestrator | 
2026-03-08 00:43:22.562432 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-08 00:43:22.562440 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.151) 0:01:06.612 **********
2026-03-08 00:43:22.562449 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562457 | orchestrator | 
2026-03-08 00:43:22.562466 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-08 00:43:22.562481 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.303) 0:01:06.916 **********
2026-03-08 00:43:22.562490 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562498 | orchestrator | 
2026-03-08 00:43:22.562507 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-08 00:43:22.562516 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.135) 0:01:07.051 **********
2026-03-08 00:43:22.562524 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562533 | orchestrator | 
2026-03-08 00:43:22.562541 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-08 00:43:22.562550 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.137) 0:01:07.189 **********
2026-03-08 00:43:22.562559 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562567 | orchestrator | 
2026-03-08 00:43:22.562576 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-08 00:43:22.562584 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.140) 0:01:07.329 **********
2026-03-08 00:43:22.562593 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562601 | orchestrator | 
2026-03-08 00:43:22.562610 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-08 00:43:22.562618 | orchestrator | Sunday 08 March 2026 00:43:21 +0000 (0:00:00.126) 0:01:07.456 **********
2026-03-08 00:43:22.562627 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562635 | orchestrator | 
2026-03-08 00:43:22.562644 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-08 00:43:22.562652 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.126) 0:01:07.583 **********
2026-03-08 00:43:22.562660 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562669 | orchestrator | 
2026-03-08 00:43:22.562678 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-08 00:43:22.562686 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.122) 0:01:07.705 **********
2026-03-08 00:43:22.562695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:22.562704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:22.562712 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562721 | orchestrator | 
2026-03-08 00:43:22.562729 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-08 00:43:22.562738 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.155) 0:01:07.861 **********
2026-03-08 00:43:22.562746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:22.562755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:22.562764 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:22.562772 | orchestrator | 
2026-03-08 00:43:22.562781 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-08 00:43:22.562789 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.138) 0:01:07.999 **********
2026-03-08 00:43:22.562804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730390 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730406 | orchestrator | 
2026-03-08 00:43:25.730413 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-08 00:43:25.730421 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.162) 0:01:08.162 **********
2026-03-08 00:43:25.730448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730460 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730466 | orchestrator | 
2026-03-08 00:43:25.730472 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-08 00:43:25.730478 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.155) 0:01:08.318 **********
2026-03-08 00:43:25.730484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730508 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730514 | orchestrator | 
2026-03-08 00:43:25.730520 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-08 00:43:25.730526 | orchestrator | Sunday 08 March 2026 00:43:22 +0000 (0:00:00.153) 0:01:08.472 **********
2026-03-08 00:43:25.730531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730547 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730556 | orchestrator | 
2026-03-08 00:43:25.730575 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-08 00:43:25.730584 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.338) 0:01:08.811 **********
2026-03-08 00:43:25.730593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730610 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730619 | orchestrator | 
2026-03-08 00:43:25.730628 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-08 00:43:25.730636 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.146) 0:01:08.958 **********
2026-03-08 00:43:25.730645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 
2026-03-08 00:43:25.730663 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:25.730673 | orchestrator | 
2026-03-08 00:43:25.730682 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-08 00:43:25.730691 | orchestrator | Sunday 08 March 2026 00:43:23 +0000 (0:00:00.151) 0:01:09.109 **********
2026-03-08 00:43:25.730700 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:25.730712 | orchestrator | 
2026-03-08 00:43:25.730721 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-08 00:43:25.730731 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.513) 0:01:09.623 **********
2026-03-08 00:43:25.730741 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:25.730751 | orchestrator | 
2026-03-08 00:43:25.730761 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-08 00:43:25.730779 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.552) 0:01:10.176 **********
2026-03-08 00:43:25.730784 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:25.730790 | orchestrator | 
2026-03-08 00:43:25.730796 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-08 00:43:25.730801 | orchestrator | Sunday 08 March 2026 00:43:24 +0000 (0:00:00.160) 0:01:10.336 **********
2026-03-08 00:43:25.730807 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'vg_name': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'})
2026-03-08 00:43:25.730814 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'vg_name': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})
2026-03-08 00:43:25.730820 | orchestrator | 
2026-03-08 00:43:25.730826 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-08 00:43:25.730832 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.189) 0:01:10.526 **********
2026-03-08 00:43:25.730853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 
2026-03-08 00:43:25.730859 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})  2026-03-08 00:43:25.730865 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:25.730871 | orchestrator | 2026-03-08 00:43:25.730877 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-08 00:43:25.730882 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.186) 0:01:10.713 ********** 2026-03-08 00:43:25.730888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'})  2026-03-08 00:43:25.730894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})  2026-03-08 00:43:25.730899 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:25.730905 | orchestrator | 2026-03-08 00:43:25.730911 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-08 00:43:25.730916 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.171) 0:01:10.885 ********** 2026-03-08 00:43:25.730922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'})  2026-03-08 00:43:25.730928 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'})  2026-03-08 00:43:25.730934 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:43:25.730939 | orchestrator | 2026-03-08 00:43:25.730945 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-08 00:43:25.730951 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.187) 0:01:11.072 ********** 2026-03-08 00:43:25.730968 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-08 00:43:25.730973 | orchestrator |  "lvm_report": { 2026-03-08 00:43:25.730987 | orchestrator |  "lv": [ 2026-03-08 00:43:25.730993 | orchestrator |  { 2026-03-08 00:43:25.730999 | orchestrator |  "lv_name": "osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead", 2026-03-08 00:43:25.731006 | orchestrator |  "vg_name": "ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead" 2026-03-08 00:43:25.731011 | orchestrator |  }, 2026-03-08 00:43:25.731017 | orchestrator |  { 2026-03-08 00:43:25.731023 | orchestrator |  "lv_name": "osd-block-ad275011-1eda-59d8-b818-a96e3c140717", 2026-03-08 00:43:25.731028 | orchestrator |  "vg_name": "ceph-ad275011-1eda-59d8-b818-a96e3c140717" 2026-03-08 00:43:25.731034 | orchestrator |  } 2026-03-08 00:43:25.731040 | orchestrator |  ], 2026-03-08 00:43:25.731045 | orchestrator |  "pv": [ 2026-03-08 00:43:25.731055 | orchestrator |  { 2026-03-08 00:43:25.731061 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-08 00:43:25.731067 | orchestrator |  "vg_name": "ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead" 2026-03-08 00:43:25.731072 | orchestrator |  }, 2026-03-08 00:43:25.731078 | orchestrator |  { 2026-03-08 00:43:25.731084 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-08 00:43:25.731089 | orchestrator |  "vg_name": "ceph-ad275011-1eda-59d8-b818-a96e3c140717" 2026-03-08 00:43:25.731095 | orchestrator |  } 2026-03-08 00:43:25.731100 | orchestrator |  ] 2026-03-08 00:43:25.731106 | orchestrator |  } 2026-03-08 00:43:25.731112 | orchestrator | } 2026-03-08 00:43:25.731117 | orchestrator | 2026-03-08 00:43:25.731123 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:43:25.731129 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:25.731135 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:25.731140 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-08 00:43:25.731146 | orchestrator | 2026-03-08 00:43:25.731152 | orchestrator | 2026-03-08 00:43:25.731157 | orchestrator | 2026-03-08 00:43:25.731163 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:43:25.731169 | orchestrator | Sunday 08 March 2026 00:43:25 +0000 (0:00:00.153) 0:01:11.226 ********** 2026-03-08 00:43:25.731174 | orchestrator | =============================================================================== 2026-03-08 00:43:25.731180 | orchestrator | Create block VGs -------------------------------------------------------- 5.81s 2026-03-08 00:43:25.731186 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s 2026-03-08 00:43:25.731192 | orchestrator | Add known partitions to the list of available block devices ------------- 1.84s 2026-03-08 00:43:25.731197 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s 2026-03-08 00:43:25.731225 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.71s 2026-03-08 00:43:25.731232 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.69s 2026-03-08 00:43:25.731238 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s 2026-03-08 00:43:25.731244 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-03-08 00:43:25.731254 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s 2026-03-08 00:43:26.125820 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s 2026-03-08 00:43:26.125909 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-03-08 00:43:26.125921 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-03-08 00:43:26.125930 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2026-03-08 00:43:26.125939 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.79s 2026-03-08 00:43:26.125953 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s 2026-03-08 00:43:26.125968 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-03-08 00:43:26.125982 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-03-08 00:43:26.125997 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-03-08 00:43:26.126011 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.64s 2026-03-08 00:43:26.126068 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.64s 2026-03-08 00:43:39.399553 | orchestrator | 2026-03-08 00:43:39 | INFO  | Prepare task for execution of facts. 2026-03-08 00:43:39.478306 | orchestrator | 2026-03-08 00:43:39 | INFO  | Task 2c5e2261-b37d-4b20-9c34-7c430f5be811 (facts) was prepared for execution. 2026-03-08 00:43:39.478415 | orchestrator | 2026-03-08 00:43:39 | INFO  | It takes a moment until task 2c5e2261-b37d-4b20-9c34-7c430f5be811 (facts) has been started and output is visible here. 
2026-03-08 00:43:51.370669 | orchestrator |
2026-03-08 00:43:51.370805 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-08 00:43:51.370822 | orchestrator |
2026-03-08 00:43:51.370835 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-08 00:43:51.370846 | orchestrator | Sunday 08 March 2026 00:43:43 +0000 (0:00:00.221) 0:00:00.221 **********
2026-03-08 00:43:51.370858 | orchestrator | ok: [testbed-manager]
2026-03-08 00:43:51.370870 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:43:51.370881 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:43:51.370893 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:43:51.370904 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:43:51.370914 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:51.370925 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:43:51.370936 | orchestrator |
2026-03-08 00:43:51.370947 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-08 00:43:51.370958 | orchestrator | Sunday 08 March 2026 00:43:44 +0000 (0:00:01.020) 0:00:01.242 **********
2026-03-08 00:43:51.370969 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:43:51.370980 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:43:51.370991 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:43:51.371002 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:43:51.371012 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:43:51.371023 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:51.371034 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:51.371044 | orchestrator |
2026-03-08 00:43:51.371055 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-08 00:43:51.371066 | orchestrator |
2026-03-08 00:43:51.371077 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-08 00:43:51.371088 | orchestrator | Sunday 08 March 2026 00:43:45 +0000 (0:00:01.047) 0:00:02.290 **********
2026-03-08 00:43:51.371098 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:43:51.371109 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:43:51.371120 | orchestrator | ok: [testbed-manager]
2026-03-08 00:43:51.371131 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:43:51.371141 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:43:51.371152 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:43:51.371163 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:43:51.371173 | orchestrator |
2026-03-08 00:43:51.371219 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-08 00:43:51.371241 | orchestrator |
2026-03-08 00:43:51.371261 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-08 00:43:51.371286 | orchestrator | Sunday 08 March 2026 00:43:50 +0000 (0:00:04.835) 0:00:07.125 **********
2026-03-08 00:43:51.371314 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:43:51.371333 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:43:51.371351 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:43:51.371391 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:43:51.371409 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:43:51.371426 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:43:51.371445 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:43:51.371463 | orchestrator |
2026-03-08 00:43:51.371484 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:43:51.371504 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371525 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371579 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371603 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371624 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371644 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371662 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:43:51.371680 | orchestrator |
2026-03-08 00:43:51.371699 | orchestrator |
2026-03-08 00:43:51.371717 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:43:51.371736 | orchestrator | Sunday 08 March 2026 00:43:51 +0000 (0:00:00.506) 0:00:07.631 **********
2026-03-08 00:43:51.371754 | orchestrator | ===============================================================================
2026-03-08 00:43:51.371773 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s
2026-03-08 00:43:51.371784 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2026-03-08 00:43:51.371795 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2026-03-08 00:43:51.371806 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2026-03-08 00:44:03.663014 | orchestrator | 2026-03-08 00:44:03 | INFO  | Prepare task for execution of frr.
2026-03-08 00:44:03.731116 | orchestrator | 2026-03-08 00:44:03 | INFO  | Task 1f4c7294-4253-420f-bf7d-7a8f99e8ad97 (frr) was prepared for execution.
2026-03-08 00:44:03.731311 | orchestrator | 2026-03-08 00:44:03 | INFO  | It takes a moment until task 1f4c7294-4253-420f-bf7d-7a8f99e8ad97 (frr) has been started and output is visible here.
2026-03-08 00:44:28.618778 | orchestrator |
2026-03-08 00:44:28.618895 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-08 00:44:28.618909 | orchestrator |
2026-03-08 00:44:28.618918 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-08 00:44:28.618927 | orchestrator | Sunday 08 March 2026 00:44:07 +0000 (0:00:00.178) 0:00:00.178 **********
2026-03-08 00:44:28.618936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-08 00:44:28.618945 | orchestrator |
2026-03-08 00:44:28.618954 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-08 00:44:28.618962 | orchestrator | Sunday 08 March 2026 00:44:07 +0000 (0:00:00.173) 0:00:00.351 **********
2026-03-08 00:44:28.618971 | orchestrator | changed: [testbed-manager]
2026-03-08 00:44:28.618980 | orchestrator |
2026-03-08 00:44:28.618989 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-08 00:44:28.618997 | orchestrator | Sunday 08 March 2026 00:44:09 +0000 (0:00:01.088) 0:00:01.440 **********
2026-03-08 00:44:28.619005 | orchestrator | changed: [testbed-manager]
2026-03-08 00:44:28.619013 | orchestrator |
2026-03-08 00:44:28.619022 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-08 00:44:28.619030 | orchestrator | Sunday 08 March 2026 00:44:18 +0000 (0:00:09.183) 0:00:10.623 **********
2026-03-08 00:44:28.619038 | orchestrator | ok: [testbed-manager]
2026-03-08 00:44:28.619047 | orchestrator |
2026-03-08 00:44:28.619055 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-08 00:44:28.619063 | orchestrator | Sunday 08 March 2026 00:44:19 +0000 (0:00:01.005) 0:00:11.629 **********
2026-03-08 00:44:28.619072 | orchestrator | changed: [testbed-manager]
2026-03-08 00:44:28.619098 | orchestrator |
2026-03-08 00:44:28.619107 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-08 00:44:28.619115 | orchestrator | Sunday 08 March 2026 00:44:20 +0000 (0:00:00.912) 0:00:12.542 **********
2026-03-08 00:44:28.619124 | orchestrator | ok: [testbed-manager]
2026-03-08 00:44:28.619132 | orchestrator |
2026-03-08 00:44:28.619141 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-03-08 00:44:28.619204 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:01.146) 0:00:13.688 **********
2026-03-08 00:44:28.619213 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:44:28.619221 | orchestrator |
2026-03-08 00:44:28.619229 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-03-08 00:44:28.619237 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.161) 0:00:13.850 **********
2026-03-08 00:44:28.619245 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:44:28.619252 | orchestrator |
2026-03-08 00:44:28.619260 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-03-08 00:44:28.619268 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.145) 0:00:13.995 **********
2026-03-08 00:44:28.619276 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:44:28.619284 | orchestrator |
2026-03-08 00:44:28.619292 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-08 00:44:28.619300 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.140) 0:00:14.159 **********
2026-03-08 00:44:28.619308 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:44:28.619316 | orchestrator |
2026-03-08 00:44:28.619324 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-08 00:44:28.619333 | orchestrator | Sunday 08 March 2026 00:44:21 +0000 (0:00:00.144) 0:00:14.299 **********
2026-03-08 00:44:28.619343 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:44:28.619352 | orchestrator |
2026-03-08 00:44:28.619361 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-08 00:44:28.619371 | orchestrator | Sunday 08 March 2026 00:44:22 +0000 (0:00:00.144) 0:00:14.444 **********
2026-03-08 00:44:28.619380 | orchestrator | changed: [testbed-manager]
2026-03-08 00:44:28.619389 | orchestrator |
2026-03-08 00:44:28.619398 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-08 00:44:28.619407 | orchestrator | Sunday 08 March 2026 00:44:23 +0000 (0:00:01.279) 0:00:15.723 **********
2026-03-08 00:44:28.619416 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-08 00:44:28.619425 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-08 00:44:28.619436 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-08 00:44:28.619445 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-08 00:44:28.619454 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-08 00:44:28.619464 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-08 00:44:28.619473 | orchestrator |
2026-03-08 00:44:28.619483 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-08 00:44:28.619492 | orchestrator | Sunday 08 March 2026 00:44:25 +0000 (0:00:02.295) 0:00:18.018 **********
2026-03-08 00:44:28.619501 | orchestrator | ok: [testbed-manager]
2026-03-08 00:44:28.619510 | orchestrator |
2026-03-08 00:44:28.619520 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-08 00:44:28.619529 | orchestrator | Sunday 08 March 2026 00:44:26 +0000 (0:00:01.247) 0:00:19.266 **********
2026-03-08 00:44:28.619538 | orchestrator | changed: [testbed-manager]
2026-03-08 00:44:28.619547 | orchestrator |
2026-03-08 00:44:28.619556 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:44:28.619573 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-08 00:44:28.619582 | orchestrator |
2026-03-08 00:44:28.619591 | orchestrator |
2026-03-08 00:44:28.619619 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:44:28.619630 | orchestrator | Sunday 08 March 2026 00:44:28 +0000 (0:00:01.430) 0:00:20.696 **********
2026-03-08 00:44:28.619640 | orchestrator | ===============================================================================
2026-03-08 00:44:28.619649 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.18s
2026-03-08 00:44:28.619657 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.30s
2026-03-08 00:44:28.619667 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s
2026-03-08 00:44:28.619676 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.28s
2026-03-08 00:44:28.619685 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.25s
2026-03-08 00:44:28.619692 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.15s
2026-03-08 00:44:28.619700 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.09s
2026-03-08 00:44:28.619708 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.01s
2026-03-08 00:44:28.619716 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s
2026-03-08 00:44:28.619729 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s
2026-03-08 00:44:28.619742 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s
2026-03-08 00:44:28.619755 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s
2026-03-08 00:44:28.619769 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.15s
2026-03-08 00:44:28.619782 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s
2026-03-08 00:44:28.619795 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s
2026-03-08 00:44:29.033137 | orchestrator |
2026-03-08 00:44:29.038301 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 8 00:44:29 UTC 2026
2026-03-08 00:44:29.038365 | orchestrator |
2026-03-08 00:44:30.963625 | orchestrator | 2026-03-08 00:44:30 | INFO  | Collection nutshell is prepared for execution
2026-03-08 00:44:30.963696 | orchestrator | 2026-03-08 00:44:30 | INFO  | A [0] - dotfiles
2026-03-08 00:44:41.040768 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - homer
2026-03-08 00:44:41.040846 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - netdata
2026-03-08 00:44:41.040854 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - openstackclient
2026-03-08 00:44:41.040859 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - phpmyadmin
2026-03-08 00:44:41.040863 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - common
2026-03-08 00:44:41.044282 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- loadbalancer
2026-03-08 00:44:41.044491 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- opensearch
2026-03-08 00:44:41.044648 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- mariadb-ng
2026-03-08 00:44:41.044878 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- horizon
2026-03-08 00:44:41.045270 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- keystone
2026-03-08 00:44:41.045746 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- neutron
2026-03-08 00:44:41.046117 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ wait-for-nova
2026-03-08 00:44:41.046213 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [6] ------- octavia
2026-03-08 00:44:41.047762 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- barbican
2026-03-08 00:44:41.047890 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- designate
2026-03-08 00:44:41.050519 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- ironic
2026-03-08 00:44:41.050549 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- placement
2026-03-08 00:44:41.050556 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- magnum
2026-03-08 00:44:41.050561 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- openvswitch
2026-03-08 00:44:41.050567 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- ovn
2026-03-08 00:44:41.050574 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- memcached
2026-03-08 00:44:41.050579 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- redis
2026-03-08 00:44:41.050585 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- rabbitmq-ng
2026-03-08 00:44:41.050590 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - kubernetes
2026-03-08 00:44:41.052398 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- kubeconfig
2026-03-08 00:44:41.052544 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- copy-kubeconfig
2026-03-08 00:44:41.052750 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [0] - ceph
2026-03-08 00:44:41.055065 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [1] -- ceph-pools
2026-03-08 00:44:41.056572 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [2] --- copy-ceph-keys
2026-03-08 00:44:41.061658 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [3] ---- cephclient
2026-03-08 00:44:41.061784 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-08 00:44:41.061810 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- wait-for-keystone
2026-03-08 00:44:41.061815 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-08 00:44:41.061819 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ glance
2026-03-08 00:44:41.061823 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ cinder
2026-03-08 00:44:41.061827 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ nova
2026-03-08 00:44:41.061831 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [4] ----- prometheus
2026-03-08 00:44:41.061835 | orchestrator | 2026-03-08 00:44:41 | INFO  | A [5] ------ grafana
2026-03-08 00:44:41.274655 | orchestrator | 2026-03-08 00:44:41 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-08 00:44:41.274760 | orchestrator | 2026-03-08 00:44:41 | INFO  | Tasks are running in the background
2026-03-08 00:44:44.156039 | orchestrator | 2026-03-08 00:44:44 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-08 00:44:46.272989 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:44:46.273277 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:44:46.273825 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:44:46.275663 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:44:46.276368 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:44:46.278650 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:44:46.279115 | orchestrator | 2026-03-08 00:44:46 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:44:46.279192 | orchestrator | 2026-03-08 00:44:46 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:44:49.329511 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:44:49.329588 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:44:49.330547 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:44:49.336354 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:44:49.336440 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:44:49.336467 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:44:49.339747 | orchestrator | 2026-03-08 00:44:49 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:44:49.339826 | orchestrator | 2026-03-08 00:44:49 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:44:52.371334 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:44:52.371444 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:44:52.371745 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:44:52.372386 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:44:52.372792 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:44:52.373621 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:44:52.376538 | orchestrator | 2026-03-08 00:44:52 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:44:52.376575 | orchestrator | 2026-03-08 00:44:52 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:44:55.488702 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:44:55.488779 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:44:55.489068 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:44:55.489595 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:44:55.490010 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:44:55.490497 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:44:55.490989 | orchestrator | 2026-03-08 00:44:55 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:44:55.491048 | orchestrator | 2026-03-08 00:44:55 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:44:58.539563 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:44:58.541950 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:44:58.542171 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:44:58.545347 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:44:58.545753 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:44:58.547313 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:44:58.548700 | orchestrator | 2026-03-08 00:44:58 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:44:58.548744 | orchestrator | 2026-03-08 00:44:58 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:45:01.699459 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:45:01.699529 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:45:01.699535 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:45:01.699540 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED
2026-03-08 00:45:01.699544 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:45:01.699548 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:45:01.699552 | orchestrator | 2026-03-08 00:45:01 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:45:01.699556 | orchestrator | 2026-03-08
00:45:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:04.721008 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED 2026-03-08 00:45:04.725202 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:45:04.729996 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:45:04.730866 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state STARTED 2026-03-08 00:45:04.732988 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED 2026-03-08 00:45:04.733498 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED 2026-03-08 00:45:04.734436 | orchestrator | 2026-03-08 00:45:04 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:45:04.734960 | orchestrator | 2026-03-08 00:45:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:45:07.992340 | orchestrator | 2026-03-08 00:45:07 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED 2026-03-08 00:45:07.992671 | orchestrator | 2026-03-08 00:45:07 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:45:07.994844 | orchestrator | 2026-03-08 00:45:07 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:45:07.995772 | orchestrator | 2026-03-08 00:45:07 | INFO  | Task 8476ab99-d617-4114-a3c6-0ac93c9963c3 is in state SUCCESS 2026-03-08 00:45:07.996037 | orchestrator | 2026-03-08 00:45:07.996057 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-08 00:45:07.996065 | orchestrator | 2026-03-08 00:45:07.996071 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
****
2026-03-08 00:45:07.996079 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.827)       0:00:00.827 **********
2026-03-08 00:45:07.996085 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:45:07.996150 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:45:07.996163 | orchestrator | changed: [testbed-manager]
2026-03-08 00:45:07.996174 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:45:07.996183 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:45:07.996189 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:45:07.996195 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:45:07.996201 | orchestrator |
2026-03-08 00:45:07.996208 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-08 00:45:07.996214 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:02.251)       0:00:04.474 **********
2026-03-08 00:45:07.996220 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-08 00:45:07.996227 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-08 00:45:07.996233 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-08 00:45:07.996239 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-08 00:45:07.996245 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-08 00:45:07.996251 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-08 00:45:07.996257 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-08 00:45:07.996264 | orchestrator |
2026-03-08 00:45:07.996274 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-08 00:45:07.996285 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:02.251)       0:00:06.725 **********
2026-03-08 00:45:07.996299 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-08 00:44:59.255794', 'end': '2026-03-08 00:44:59.262639', 'delta': '0:00:00.006845', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[equivalent "ok" results for testbed-manager and testbed-node-1 through testbed-node-5, identical except for the per-host command start/end timestamps: on every host "ls -F ~/.tmux.conf" returned rc=2 ("No such file or directory"), so there was no existing file to remove]
2026-03-08 00:45:07.996398 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-08 00:45:07.996405 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:02.469)       0:00:09.196 **********
2026-03-08 00:45:07.996411 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-08 00:45:07.996417 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-08 00:45:07.996423 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-08 00:45:07.996429 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-08 00:45:07.996437 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-08 00:45:07.996447 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-08 00:45:07.996464 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-08 00:45:07.996474 | orchestrator |
2026-03-08 00:45:07.996484 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-08 00:45:07.996494 | orchestrator | Sunday 08 March 2026 00:45:05 +0000 (0:00:01.807)       0:00:11.003 **********
2026-03-08 00:45:07.996503 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-08 00:45:07.996513 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-08 00:45:07.996522 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-08 00:45:07.996531 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-08 00:45:07.996541 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-08 00:45:07.996551 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-08 00:45:07.996560 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-08 00:45:07.996570 | orchestrator |
2026-03-08 00:45:07.996581 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:45:07.996599 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996612 | orchestrator | testbed-node-0  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996627 | orchestrator | testbed-node-1  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996638 | orchestrator | testbed-node-2  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996645 | orchestrator | testbed-node-3  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996651 | orchestrator | testbed-node-4  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996657 | orchestrator | testbed-node-5  : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:45:07.996676 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:45:07.996682 | orchestrator | Sunday 08 March 2026 00:45:07 +0000 (0:00:02.193)       0:00:13.196 **********
2026-03-08 00:45:07.996688 | orchestrator | ===============================================================================
2026-03-08 00:45:07.996694 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.65s
2026-03-08 00:45:07.996700 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.47s
2026-03-08 00:45:07.996706 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.25s
2026-03-08 00:45:07.996712 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.19s
2026-03-08 00:45:07.996719 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.81s
2026-03-08 00:45:08.000693 | orchestrator | 2026-03-08 00:45:07 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state STARTED
2026-03-08 00:45:08.003281 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:45:08.005205 | orchestrator | 2026-03-08 00:45:08 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:45:08.005405 | orchestrator | 2026-03-08 00:45:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:45:11.239694 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task c61dad8a-377f-4e01-8a85-c3fe2f4480ea is in state STARTED
2026-03-08 00:45:11.240799 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state STARTED
2026-03-08 00:45:11.242519 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:45:11.244224 | orchestrator | 2026-03-08 00:45:11 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is
in state STARTED
[polling rounds for tasks c61dad8a-377f-4e01-8a85-c3fe2f4480ea, b50dd512-2d13-471d-8f7d-c3bc8feaf194, 9de37ddc-e3a5-4c24-9328-cc42b6b594d9, 90275470-a5b4-491a-874f-4bf51d7bc505, 481a9ec5-f6ec-4960-8a07-6224e4ac4194, 43be1861-786a-45a2-9d43-f8a294d16f84 and 409ff042-903c-48ed-95a9-5cf0135e1e2e repeated every ~3 s from 00:45:11, each ending with "Wait 1 second(s) until the next check" -- all in state STARTED, until:]
2026-03-08 00:45:32.900085 | orchestrator | 2026-03-08 00:45:32 | INFO  | Task b50dd512-2d13-471d-8f7d-c3bc8feaf194 is in state SUCCESS
[the remaining six tasks continued to poll in state STARTED, until:]
2026-03-08 00:45:45.140620 | orchestrator | 2026-03-08 00:45:45 | INFO  | Task 481a9ec5-f6ec-4960-8a07-6224e4ac4194 is in state SUCCESS
[the five remaining tasks (c61dad8a-377f-4e01-8a85-c3fe2f4480ea, 9de37ddc-e3a5-4c24-9328-cc42b6b594d9, 90275470-a5b4-491a-874f-4bf51d7bc505, 43be1861-786a-45a2-9d43-f8a294d16f84, 409ff042-903c-48ed-95a9-5cf0135e1e2e) remained in state STARTED through 00:46:09, with "Wait 1 second(s) until the next check" after each round]
2026-03-08 00:46:09.827350 |
orchestrator | 2026-03-08 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:12.897575 | orchestrator | 2026-03-08 00:46:12 | INFO  | Task c61dad8a-377f-4e01-8a85-c3fe2f4480ea is in state STARTED 2026-03-08 00:46:12.898167 | orchestrator | 2026-03-08 00:46:12 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:12.899154 | orchestrator | 2026-03-08 00:46:12 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:12.899828 | orchestrator | 2026-03-08 00:46:12 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED 2026-03-08 00:46:12.901868 | orchestrator | 2026-03-08 00:46:12 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:12.901891 | orchestrator | 2026-03-08 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:15.937340 | orchestrator | 2026-03-08 00:46:15 | INFO  | Task c61dad8a-377f-4e01-8a85-c3fe2f4480ea is in state STARTED 2026-03-08 00:46:15.938411 | orchestrator | 2026-03-08 00:46:15 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:15.939472 | orchestrator | 2026-03-08 00:46:15 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:15.940624 | orchestrator | 2026-03-08 00:46:15 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED 2026-03-08 00:46:15.941481 | orchestrator | 2026-03-08 00:46:15 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:15.941569 | orchestrator | 2026-03-08 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:18.988496 | orchestrator | 2026-03-08 00:46:18.988591 | orchestrator | 2026-03-08 00:46:18.988603 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-08 00:46:18.988611 | orchestrator | 2026-03-08 00:46:18.988618 | orchestrator | TASK [osism.services.homer : 
Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-08 00:46:18.988627 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.569) 0:00:00.569 **********
2026-03-08 00:46:18.988723 | orchestrator | ok: [testbed-manager] => {
2026-03-08 00:46:18.988735 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-08 00:46:18.988744 | orchestrator | }
2026-03-08 00:46:18.988750 | orchestrator |
2026-03-08 00:46:18.988757 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-08 00:46:18.988764 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.400) 0:00:00.970 **********
2026-03-08 00:46:18.988769 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.988777 | orchestrator |
2026-03-08 00:46:18.988783 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-08 00:46:18.988789 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:01.822) 0:00:02.792 **********
2026-03-08 00:46:18.988796 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-08 00:46:18.988803 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-08 00:46:18.988809 | orchestrator |
2026-03-08 00:46:18.988816 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-08 00:46:18.988822 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:01.603) 0:00:04.395 **********
2026-03-08 00:46:18.988828 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.988835 | orchestrator |
2026-03-08 00:46:18.988841 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-08 00:46:18.988847 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:02.706) 0:00:07.102 **********
2026-03-08 00:46:18.988854 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.988861 | orchestrator |
2026-03-08 00:46:18.988867 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-08 00:46:18.988875 | orchestrator | Sunday 08 March 2026 00:45:01 +0000 (0:00:01.356) 0:00:08.459 **********
2026-03-08 00:46:18.988882 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-08 00:46:18.988890 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.988896 | orchestrator |
2026-03-08 00:46:18.988902 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-08 00:46:18.988908 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:25.797) 0:00:34.257 **********
2026-03-08 00:46:18.988936 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.988943 | orchestrator |
2026-03-08 00:46:18.988951 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:18.988959 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:18.988967 | orchestrator |
2026-03-08 00:46:18.988974 | orchestrator |
2026-03-08 00:46:18.988981 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:18.988989 | orchestrator | Sunday 08 March 2026 00:45:32 +0000 (0:00:05.007) 0:00:39.264 **********
2026-03-08 00:46:18.988995 | orchestrator | ===============================================================================
2026-03-08 00:46:18.989013 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.80s
2026-03-08 00:46:18.989019 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.01s
2026-03-08 00:46:18.989024 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.71s
2026-03-08 00:46:18.989030 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.82s
2026-03-08 00:46:18.989036 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.60s
2026-03-08 00:46:18.989042 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.36s
2026-03-08 00:46:18.989067 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.40s
2026-03-08 00:46:18.989074 | orchestrator |
2026-03-08 00:46:18.989080 | orchestrator |
2026-03-08 00:46:18.989086 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-08 00:46:18.989092 | orchestrator |
2026-03-08 00:46:18.989097 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-08 00:46:18.989103 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.645) 0:00:00.645 **********
2026-03-08 00:46:18.989110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-08 00:46:18.989118 | orchestrator |
2026-03-08 00:46:18.989124 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-08 00:46:18.989130 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.360) 0:00:01.008 **********
2026-03-08 00:46:18.989136 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-08 00:46:18.989142 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-08 00:46:18.989149 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-08 00:46:18.989156 | orchestrator |
2026-03-08 00:46:18.989162 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-08 00:46:18.989168 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:02.311) 0:00:03.320 **********
2026-03-08 00:46:18.989174 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989180 | orchestrator |
2026-03-08 00:46:18.989186 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-08 00:46:18.989193 | orchestrator | Sunday 08 March 2026 00:44:58 +0000 (0:00:01.788) 0:00:05.108 **********
2026-03-08 00:46:18.989216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-08 00:46:18.989223 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.989229 | orchestrator |
2026-03-08 00:46:18.989235 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-08 00:46:18.989242 | orchestrator | Sunday 08 March 2026 00:45:32 +0000 (0:00:34.078) 0:00:39.187 **********
2026-03-08 00:46:18.989248 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989254 | orchestrator |
2026-03-08 00:46:18.989260 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-08 00:46:18.989266 | orchestrator | Sunday 08 March 2026 00:45:33 +0000 (0:00:01.067) 0:00:40.255 **********
2026-03-08 00:46:18.989279 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.989285 | orchestrator |
2026-03-08 00:46:18.989291 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-08 00:46:18.989297 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:00.644) 0:00:40.899 **********
2026-03-08 00:46:18.989303 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989309 | orchestrator |
2026-03-08 00:46:18.989315 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-08 00:46:18.989321 | orchestrator | Sunday 08 March 2026 00:45:37 +0000 (0:00:02.960) 0:00:43.860 **********
2026-03-08 00:46:18.989329 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989336 | orchestrator |
2026-03-08 00:46:18.989342 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-08 00:46:18.989349 | orchestrator | Sunday 08 March 2026 00:45:39 +0000 (0:00:02.239) 0:00:46.099 **********
2026-03-08 00:46:18.989356 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989361 | orchestrator |
2026-03-08 00:46:18.989367 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-08 00:46:18.989372 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:01.567) 0:00:47.666 **********
2026-03-08 00:46:18.989378 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.989384 | orchestrator |
2026-03-08 00:46:18.989390 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:18.989443 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:18.989451 | orchestrator |
2026-03-08 00:46:18.989491 | orchestrator |
2026-03-08 00:46:18.989498 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:18.989504 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:00.652) 0:00:48.319 **********
2026-03-08 00:46:18.989511 | orchestrator | ===============================================================================
2026-03-08 00:46:18.989516 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.08s
2026-03-08 00:46:18.989523 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.96s
2026-03-08 00:46:18.989530 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.31s
2026-03-08 00:46:18.989536 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.24s
2026-03-08 00:46:18.989543 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.79s
2026-03-08 00:46:18.989549 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.57s
2026-03-08 00:46:18.989556 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.07s
2026-03-08 00:46:18.989569 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.65s
2026-03-08 00:46:18.989575 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s
2026-03-08 00:46:18.989581 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.36s
2026-03-08 00:46:18.989588 | orchestrator |
2026-03-08 00:46:18.989594 | orchestrator |
2026-03-08 00:46:18.989600 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-08 00:46:18.989606 | orchestrator |
2026-03-08 00:46:18.989612 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-08 00:46:18.989619 | orchestrator | Sunday 08 March 2026 00:45:12 +0000 (0:00:00.268) 0:00:00.268 **********
2026-03-08 00:46:18.989625 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.989632 | orchestrator |
2026-03-08 00:46:18.989639 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-08 00:46:18.989645 | orchestrator | Sunday 08 March 2026 00:45:12 +0000 (0:00:00.841) 0:00:01.109 **********
2026-03-08 00:46:18.989652 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-08 00:46:18.989658 | orchestrator |
2026-03-08 00:46:18.989665 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-08 00:46:18.989679 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:00.982) 0:00:02.092 **********
2026-03-08 00:46:18.989685 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989691 | orchestrator |
2026-03-08 00:46:18.989697 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-08 00:46:18.989704 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:01.162) 0:00:03.254 **********
2026-03-08 00:46:18.989710 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-08 00:46:18.989716 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:18.989722 | orchestrator |
2026-03-08 00:46:18.989728 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-08 00:46:18.989735 | orchestrator | Sunday 08 March 2026 00:46:08 +0000 (0:00:52.970) 0:00:56.225 **********
2026-03-08 00:46:18.989742 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:18.989748 | orchestrator |
2026-03-08 00:46:18.989755 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:18.989761 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:18.989768 | orchestrator |
2026-03-08 00:46:18.989775 | orchestrator |
2026-03-08 00:46:18.989781 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:46:18.989797 | orchestrator | Sunday 08 March 2026 00:46:15 +0000 (0:00:07.876) 0:01:04.102 **********
2026-03-08 00:46:18.989804 | orchestrator | ===============================================================================
2026-03-08 00:46:18.989811 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.97s
2026-03-08 00:46:18.989817 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.88s
2026-03-08 00:46:18.989823
| orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.16s
2026-03-08 00:46:18.989829 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.98s
2026-03-08 00:46:18.989835 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.84s
2026-03-08 00:46:18.989841 | orchestrator | 2026-03-08 00:46:18 | INFO  | Task c61dad8a-377f-4e01-8a85-c3fe2f4480ea is in state SUCCESS
2026-03-08 00:46:18.989848 | orchestrator | 2026-03-08 00:46:18 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED
2026-03-08 00:46:18.989941 | orchestrator | 2026-03-08 00:46:18 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:46:18.989947 | orchestrator | 2026-03-08 00:46:18 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state STARTED
2026-03-08 00:46:18.990397 | orchestrator | 2026-03-08 00:46:18 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:46:18.990487 | orchestrator | 2026-03-08 00:46:18 | INFO  | Wait 1 second(s) until the next check
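The polling output above is a simple fixed-interval wait loop: the client re-reads each task's state, reports it, and sleeps before the next round until every task reaches a terminal state. A minimal sketch of that pattern in Python, assuming a hypothetical `get_state` callable (this is an illustration, not the actual OSISM client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll task states until all reach a terminal state, logging each check."""
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            if state in terminal:
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note that the observed gap between checks (about three seconds) is larger than the logged one-second wait, which is consistent with the state lookups themselves taking time on a loaded manager.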
2026-03-08 00:46:34.371666 | orchestrator | 2026-03-08 00:46:34 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:46:34.373863 | orchestrator | 2026-03-08 00:46:34 | INFO  | Task 43be1861-786a-45a2-9d43-f8a294d16f84 is in state SUCCESS
2026-03-08 00:46:34.375153 | orchestrator |
2026-03-08 00:46:34.375181 | orchestrator |
2026-03-08 00:46:34.375190 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:46:34.375197 | orchestrator |
2026-03-08 00:46:34.375203 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:46:34.375210 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.852) 0:00:00.852 **********
2026-03-08 00:46:34.375216 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-08 00:46:34.375223 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-08 00:46:34.375230 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-08 00:46:34.375236 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-08 00:46:34.375243 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-08 00:46:34.375249 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-08 00:46:34.375256 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-08 00:46:34.375262 | orchestrator |
2026-03-08 00:46:34.375269 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-08 00:46:34.375275 | orchestrator |
2026-03-08 00:46:34.375281 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-08 00:46:34.375288 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:02.286) 0:00:03.139 **********
2026-03-08 00:46:34.375303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:34.375336 | orchestrator |
2026-03-08 00:46:34.375344 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-08 00:46:34.375352 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:01.705) 0:00:04.844 **********
2026-03-08 00:46:34.375359 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:34.375367 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:34.375374 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:34.375380 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:34.375387 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:34.375394 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:34.375400 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:34.375407 | orchestrator |
2026-03-08 00:46:34.375414 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-08 00:46:34.375420 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:02.392) 0:00:07.237 **********
2026-03-08 00:46:34.375427 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:34.375433 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:34.375440 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:34.375446 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:34.375453 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:34.375459 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:34.375465 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:34.375472 | orchestrator |
2026-03-08 00:46:34.375480 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-08 00:46:34.375486 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:03.595) 0:00:10.833 **********
2026-03-08 00:46:34.375493 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:34.375500 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:34.375506 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:34.375512 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:34.375519 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.375526 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:34.375533 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:34.375540 | orchestrator |
2026-03-08 00:46:34.375546 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-08 00:46:34.375553 | orchestrator | Sunday 08 March 2026 00:45:08 +0000 (0:00:04.110) 0:00:14.944 **********
2026-03-08 00:46:34.375559 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:34.375566 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:34.375572 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.375579 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:34.375585 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:34.375592 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:34.375599 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:34.375605 | orchestrator |
2026-03-08 00:46:34.375611 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-08 00:46:34.375618 | orchestrator | Sunday 08 March 2026 00:45:19 +0000 (0:00:11.015) 0:00:25.960 **********
2026-03-08 00:46:34.375625 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:34.375631 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:34.375638 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:34.375644 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:34.375651 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:34.375657 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:34.375664 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.375670 | orchestrator |
2026-03-08 00:46:34.375677 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-08 00:46:34.375683 | orchestrator | Sunday 08 March 2026 00:46:01 +0000 (0:00:42.304) 0:01:08.264 **********
2026-03-08 00:46:34.375690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:34.375698 | orchestrator |
2026-03-08 00:46:34.375705 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-08 00:46:34.375718 | orchestrator | Sunday 08 March 2026 00:46:03 +0000 (0:00:01.669) 0:01:09.933 **********
2026-03-08 00:46:34.375725 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-08 00:46:34.375731 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-08 00:46:34.375735 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-08 00:46:34.375739 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-08 00:46:34.375750 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-08 00:46:34.375770 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-08 00:46:34.375775 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-08 00:46:34.375780 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-08 00:46:34.375784 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-08 00:46:34.375788 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-08 00:46:34.375792 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-08 00:46:34.375796 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-08 00:46:34.375801 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-08 00:46:34.375805 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-08 00:46:34.375809 | orchestrator |
2026-03-08 00:46:34.375814 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-08 00:46:34.375819 | orchestrator | Sunday 08 March 2026 00:46:08 +0000 (0:00:05.766) 0:01:15.699 **********
2026-03-08 00:46:34.375825 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:34.375831 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:34.375838 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:34.375843 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:34.375850 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:34.375856 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:34.375863 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:34.375870 | orchestrator |
2026-03-08 00:46:34.375876 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-08 00:46:34.375882 | orchestrator | Sunday 08 March 2026 00:46:10 +0000 (0:00:01.348) 0:01:17.048 **********
2026-03-08 00:46:34.375888 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:34.375895 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.375901 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:34.375907 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:34.375913 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:34.375919 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:34.375926 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:34.375932 | orchestrator |
2026-03-08 00:46:34.375938 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-08 00:46:34.375945 | orchestrator | Sunday 08 March 2026 00:46:12 +0000 (0:00:02.010) 0:01:19.058 **********
2026-03-08 00:46:34.375951 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:34.375958 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:34.375964 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:34.375971 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:34.375978 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:34.375985 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:34.375991 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:34.375998 | orchestrator |
2026-03-08 00:46:34.376004 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-08 00:46:34.376010 | orchestrator | Sunday 08 March 2026 00:46:13 +0000 (0:00:01.678) 0:01:20.737 **********
2026-03-08 00:46:34.376017 | orchestrator | ok: [testbed-manager]
2026-03-08 00:46:34.376023 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:46:34.376029 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:46:34.376073 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:46:34.376082 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:46:34.376089 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:46:34.376101 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:46:34.376108 | orchestrator |
2026-03-08 00:46:34.376115 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-08 00:46:34.376122 | orchestrator | Sunday 08 March 2026 00:46:16 +0000 (0:00:02.438) 0:01:23.175 **********
2026-03-08 00:46:34.376128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-08 00:46:34.376140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:46:34.376148 | orchestrator |
2026-03-08 00:46:34.376154 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-08 00:46:34.376160 | orchestrator | Sunday 08 March 2026 00:46:17 +0000 (0:00:01.379) 0:01:24.555 **********
2026-03-08 00:46:34.376166 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.376173 | orchestrator |
2026-03-08 00:46:34.376179 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-08 00:46:34.376185 | orchestrator | Sunday 08 March 2026 00:46:20 +0000 (0:00:03.155) 0:01:27.710 **********
2026-03-08 00:46:34.376192 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:46:34.376198 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:46:34.376204 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:46:34.376210 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:46:34.376216 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:46:34.376222 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:46:34.376229 | orchestrator | changed: [testbed-manager]
2026-03-08 00:46:34.376235 | orchestrator |
2026-03-08 00:46:34.376242 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:46:34.376248 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:34.376255 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:34.376261 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:34.376268 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:34.376281 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:46:34.376287 | orchestrator | testbed-node-4 : ok=15  changed=7
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:46:34.376291 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:46:34.376294 | orchestrator | 2026-03-08 00:46:34.376300 | orchestrator | 2026-03-08 00:46:34.376307 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:46:34.376313 | orchestrator | Sunday 08 March 2026 00:46:32 +0000 (0:00:11.658) 0:01:39.368 ********** 2026-03-08 00:46:34.376319 | orchestrator | =============================================================================== 2026-03-08 00:46:34.376326 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.30s 2026-03-08 00:46:34.376332 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.66s 2026-03-08 00:46:34.376339 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.02s 2026-03-08 00:46:34.376345 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.77s 2026-03-08 00:46:34.376351 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.11s 2026-03-08 00:46:34.376362 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.60s 2026-03-08 00:46:34.376368 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.16s 2026-03-08 00:46:34.376375 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.44s 2026-03-08 00:46:34.376381 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.39s 2026-03-08 00:46:34.376387 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.29s 2026-03-08 00:46:34.376394 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 
2.01s 2026-03-08 00:46:34.376399 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.71s 2026-03-08 00:46:34.376403 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.68s 2026-03-08 00:46:34.376406 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.67s 2026-03-08 00:46:34.376410 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.38s 2026-03-08 00:46:34.376414 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.35s 2026-03-08 00:46:34.376905 | orchestrator | 2026-03-08 00:46:34 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:34.377465 | orchestrator | 2026-03-08 00:46:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:37.436430 | orchestrator | 2026-03-08 00:46:37 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:37.440516 | orchestrator | 2026-03-08 00:46:37 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:37.441575 | orchestrator | 2026-03-08 00:46:37 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:37.441626 | orchestrator | 2026-03-08 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:40.478509 | orchestrator | 2026-03-08 00:46:40 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:40.480557 | orchestrator | 2026-03-08 00:46:40 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:40.481890 | orchestrator | 2026-03-08 00:46:40 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:40.481950 | orchestrator | 2026-03-08 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:43.516948 | orchestrator | 2026-03-08 00:46:43 | 
INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:43.517914 | orchestrator | 2026-03-08 00:46:43 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:43.519758 | orchestrator | 2026-03-08 00:46:43 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:43.519883 | orchestrator | 2026-03-08 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:46.554138 | orchestrator | 2026-03-08 00:46:46 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:46.556061 | orchestrator | 2026-03-08 00:46:46 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:46.557503 | orchestrator | 2026-03-08 00:46:46 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:46.557530 | orchestrator | 2026-03-08 00:46:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:49.621264 | orchestrator | 2026-03-08 00:46:49 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:49.625826 | orchestrator | 2026-03-08 00:46:49 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:49.631103 | orchestrator | 2026-03-08 00:46:49 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:49.631155 | orchestrator | 2026-03-08 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:52.679426 | orchestrator | 2026-03-08 00:46:52 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:52.681735 | orchestrator | 2026-03-08 00:46:52 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:52.685374 | orchestrator | 2026-03-08 00:46:52 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:52.685487 | orchestrator | 2026-03-08 00:46:52 | INFO  | Wait 1 second(s) until 
the next check 2026-03-08 00:46:55.720628 | orchestrator | 2026-03-08 00:46:55 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:55.722702 | orchestrator | 2026-03-08 00:46:55 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:55.723977 | orchestrator | 2026-03-08 00:46:55 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:55.724082 | orchestrator | 2026-03-08 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:46:58.764803 | orchestrator | 2026-03-08 00:46:58 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:46:58.765857 | orchestrator | 2026-03-08 00:46:58 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:46:58.766967 | orchestrator | 2026-03-08 00:46:58 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:46:58.766996 | orchestrator | 2026-03-08 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:01.804245 | orchestrator | 2026-03-08 00:47:01 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state STARTED 2026-03-08 00:47:01.804801 | orchestrator | 2026-03-08 00:47:01 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:47:01.805714 | orchestrator | 2026-03-08 00:47:01 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:47:01.805748 | orchestrator | 2026-03-08 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:04.848983 | orchestrator | 2026-03-08 00:47:04.849096 | orchestrator | 2026-03-08 00:47:04 | INFO  | Task 9de37ddc-e3a5-4c24-9328-cc42b6b594d9 is in state SUCCESS 2026-03-08 00:47:04.850838 | orchestrator | 2026-03-08 00:47:04.850898 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-08 00:47:04.850907 | orchestrator | 2026-03-08 00:47:04.850914 | 
orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-08 00:47:04.850921 | orchestrator | Sunday 08 March 2026 00:44:45 +0000 (0:00:00.223) 0:00:00.224 ********** 2026-03-08 00:47:04.850934 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:47:04.850942 | orchestrator | 2026-03-08 00:47:04.850949 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-08 00:47:04.850955 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:01.331) 0:00:01.556 ********** 2026-03-08 00:47:04.850997 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851004 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851030 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851037 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851043 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851068 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851075 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851081 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-08 00:47:04.851087 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851093 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851099 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-03-08 00:47:04.851107 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851114 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851120 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851127 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-08 00:47:04.851133 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851139 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851145 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851152 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851158 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851165 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-08 00:47:04.851170 | orchestrator | 2026-03-08 00:47:04.851176 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-08 00:47:04.851182 | orchestrator | Sunday 08 March 2026 00:44:50 +0000 (0:00:03.756) 0:00:05.312 ********** 2026-03-08 00:47:04.851189 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:47:04.851197 | orchestrator | 2026-03-08 00:47:04.851203 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-08 00:47:04.851209 | 
orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:01.285) 0:00:06.598 **********
2026-03-08 00:47:04.851220 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851320 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851380 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851458 | orchestrator | 
2026-03-08 00:47:04.851464 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-08 00:47:04.851472 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:04.284) 0:00:10.883 **********
2026-03-08 00:47:04.851480 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851499 | orchestrator | skipping: [testbed-manager]
2026-03-08 00:47:04.851507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851542 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:47:04.851552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851607 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:47:04.851614 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:47:04.851627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851647 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:47:04.851654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-08 00:47:04.851660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.851667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851678 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:04.851684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851712 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:04.851718 | orchestrator | 2026-03-08 00:47:04.851725 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-08 00:47:04.851732 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:01.540) 0:00:12.424 ********** 2026-03-08 00:47:04.851739 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851752 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851759 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:04.851765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851789 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:04.851807 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.851853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.851860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852105 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:04.852112 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:04.852118 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:04.852124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.852131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852144 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:04.852157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-08 00:47:04.852164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852177 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:04.852183 | orchestrator | 2026-03-08 00:47:04.852189 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-08 00:47:04.852196 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:02.412) 0:00:14.836 ********** 2026-03-08 00:47:04.852202 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:04.852208 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:04.852215 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:04.852221 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:04.852227 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:04.852239 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:04.852251 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:04.852258 | orchestrator | 2026-03-08 00:47:04.852264 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-08 00:47:04.852270 | orchestrator | Sunday 08 March 2026 00:45:02 +0000 (0:00:01.622) 0:00:16.459 ********** 2026-03-08 00:47:04.852277 | orchestrator | skipping: [testbed-manager] 2026-03-08 00:47:04.852283 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:47:04.852294 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:47:04.852300 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:47:04.852306 | 
orchestrator | skipping: [testbed-node-3] 2026-03-08 00:47:04.852312 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:47:04.852318 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:47:04.852323 | orchestrator | 2026-03-08 00:47:04.852330 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-08 00:47:04.852335 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:02.008) 0:00:18.467 ********** 2026-03-08 00:47:04.852342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852363 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852433 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852498 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.852518 | orchestrator |
2026-03-08 00:47:04.852522 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-08 00:47:04.852526 | orchestrator | Sunday 08 March 2026 00:45:11 +0000 (0:00:07.502) 0:00:25.970 **********
2026-03-08 00:47:04.852530 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2026-03-08 00:47:04.852551 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:47:04.852554 | orchestrator |
2026-03-08 00:47:04.852562 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-08 00:47:04.852566 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:01.529) 0:00:27.499 **********
2026-03-08 00:47:04.852569 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2026-03-08 00:47:04.852591 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:47:04.852598 | orchestrator |
2026-03-08 00:47:04.852602 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-08 00:47:04.852606 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:01.003) 0:00:28.502 **********
2026-03-08 00:47:04.852612 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2026-03-08 00:47:04.852631 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:47:04.852634 | orchestrator |
2026-03-08 00:47:04.852638 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-08 00:47:04.852642 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:00.929) 0:00:29.431 **********
2026-03-08 00:47:04.852646 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2026-03-08 00:47:04.852664 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 00:47:04.852668 | orchestrator |
2026-03-08 00:47:04.852672 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-08 00:47:04.852675 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:00.876) 0:00:30.308 **********
2026-03-08 00:47:04.852679 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.852683 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.852687 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.852690 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:04.852694 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.852698 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.852701 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:04.852705 | orchestrator |
2026-03-08 00:47:04.852709 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-08 00:47:04.852713 | orchestrator | Sunday 08 March 2026 00:45:19 +0000 (0:00:03.320) 0:00:33.629 **********
2026-03-08 00:47:04.852717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852721 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852725 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852729 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852732 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852736 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852740 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-08 00:47:04.852743 | orchestrator |
2026-03-08 00:47:04.852747 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-08 00:47:04.852751 | orchestrator | Sunday 08 March 2026 00:45:23 +0000 (0:00:04.679) 0:00:38.309 **********
2026-03-08 00:47:04.852755 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.852759 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.852762 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.852766 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.852770 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.852773 |
orchestrator | changed: [testbed-node-4] 2026-03-08 00:47:04.852777 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:47:04.852784 | orchestrator | 2026-03-08 00:47:04.852788 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-08 00:47:04.852792 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:03.502) 0:00:41.811 ********** 2026-03-08 00:47:04.852796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852815 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852835 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852852 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852868 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852884 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852898 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852905 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852916 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852934 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.852946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:47:04.852954 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.852967 | orchestrator | 2026-03-08 00:47:04.852974 | 
orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-08 00:47:04.852981 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:03.250) 0:00:45.062 **********
2026-03-08 00:47:04.852987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.852994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853037 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853044 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853051 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853059 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-08 00:47:04.853065 | orchestrator |
2026-03-08 00:47:04.853072 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-08 00:47:04.853079 | orchestrator | Sunday 08 March 2026 00:45:33 +0000 (0:00:03.310) 0:00:48.372 **********
2026-03-08 00:47:04.853087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-08 00:47:04.853093 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-08 00:47:04.853100 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-08 00:47:04.853107 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-08 00:47:04.853114 | orchestrator | changed:
[testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:04.853120 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:04.853127 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-08 00:47:04.853133 | orchestrator | 2026-03-08 00:47:04.853139 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-08 00:47:04.853146 | orchestrator | Sunday 08 March 2026 00:45:36 +0000 (0:00:02.601) 0:00:50.973 ********** 2026-03-08 00:47:04.853152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853435 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-08 00:47:04.853493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853509 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:47:04.853542 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:47:04.853548 | orchestrator |
2026-03-08 00:47:04.853554 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-08 00:47:04.853561 | orchestrator | Sunday 08 March 2026 00:45:41 +0000 (0:00:05.086) 0:00:56.060 **********
2026-03-08 00:47:04.853567 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.853573 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.853579 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.853585 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.853591 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.853597 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:04.853603 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:04.853609 | orchestrator |
2026-03-08 00:47:04.853616 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-08 00:47:04.853622 | orchestrator | Sunday 08 March 2026 00:45:43 +0000 (0:00:02.143) 0:00:58.203 **********
2026-03-08 00:47:04.853628 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.853635 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.853642 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.853648 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.853653 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.853659 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:04.853665 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:04.853672 | orchestrator |
2026-03-08 00:47:04.853677 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853683 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:01.288) 0:00:59.492 **********
2026-03-08 00:47:04.853689 | orchestrator |
2026-03-08 00:47:04.853695 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853701 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.069) 0:00:59.562 **********
2026-03-08 00:47:04.853707 | orchestrator |
2026-03-08 00:47:04.853713 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853719 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.067) 0:00:59.630 **********
2026-03-08 00:47:04.853726 | orchestrator |
2026-03-08 00:47:04.853732 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853738 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.247) 0:00:59.878 **********
2026-03-08 00:47:04.853744 | orchestrator |
2026-03-08 00:47:04.853751 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853756 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.065) 0:00:59.944 **********
2026-03-08 00:47:04.853763 | orchestrator |
2026-03-08 00:47:04.853769 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853775 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.064) 0:01:00.008 **********
2026-03-08 00:47:04.853782 | orchestrator |
2026-03-08 00:47:04.853788 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-08 00:47:04.853799 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.062) 0:01:00.071 **********
2026-03-08 00:47:04.853813 | orchestrator |
2026-03-08 00:47:04.853819 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-08 00:47:04.853830 | orchestrator | Sunday 08 March 2026 00:45:45 +0000 (0:00:00.089) 0:01:00.160 **********
2026-03-08 00:47:04.853836 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.853842 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.853848 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.853854 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:04.853860 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.853866 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:04.853871 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.853877 | orchestrator |
2026-03-08 00:47:04.853887 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-08 00:47:04.853893 | orchestrator | Sunday 08 March 2026 00:46:21 +0000 (0:00:35.841) 0:01:36.002 **********
2026-03-08 00:47:04.853900 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:04.853906 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:47:04.853912 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:04.853918 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:04.853924 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:47:04.853931 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:47:04.853937 | orchestrator | changed: [testbed-manager]
2026-03-08 00:47:04.853943 | orchestrator |
2026-03-08 00:47:04.853949 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-08 00:47:04.853955 | orchestrator | Sunday 08 March 2026 00:46:56 +0000 (0:00:35.122) 0:02:11.124 **********
2026-03-08 00:47:04.853961 | orchestrator | ok: [testbed-manager]
2026-03-08 00:47:04.853969 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:04.853976 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:04.853982 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:04.853988 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:47:04.853996 |
orchestrator | ok: [testbed-node-4] 2026-03-08 00:47:04.854002 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:47:04.854146 | orchestrator | 2026-03-08 00:47:04.854163 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-08 00:47:04.854169 | orchestrator | Sunday 08 March 2026 00:46:58 +0000 (0:00:02.184) 0:02:13.309 ********** 2026-03-08 00:47:04.854175 | orchestrator | changed: [testbed-manager] 2026-03-08 00:47:04.854182 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:04.854187 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:04.854193 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:47:04.854199 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:47:04.854204 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:04.854211 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:47:04.854217 | orchestrator | 2026-03-08 00:47:04.854223 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:47:04.854230 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854237 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854243 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854249 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854255 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854261 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 00:47:04.854276 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-08 
00:47:04.854282 | orchestrator | 2026-03-08 00:47:04.854289 | orchestrator | 2026-03-08 00:47:04.854295 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:47:04.854301 | orchestrator | Sunday 08 March 2026 00:47:03 +0000 (0:00:04.827) 0:02:18.136 ********** 2026-03-08 00:47:04.854308 | orchestrator | =============================================================================== 2026-03-08 00:47:04.854314 | orchestrator | common : Restart fluentd container ------------------------------------- 35.84s 2026-03-08 00:47:04.854320 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.12s 2026-03-08 00:47:04.854327 | orchestrator | common : Copying over config.json files for services -------------------- 7.50s 2026-03-08 00:47:04.854333 | orchestrator | common : Check common containers ---------------------------------------- 5.09s 2026-03-08 00:47:04.854340 | orchestrator | common : Restart cron container ----------------------------------------- 4.83s 2026-03-08 00:47:04.854346 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.68s 2026-03-08 00:47:04.854351 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.28s 2026-03-08 00:47:04.854357 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.76s 2026-03-08 00:47:04.854364 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.50s 2026-03-08 00:47:04.854370 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.32s 2026-03-08 00:47:04.854375 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.31s 2026-03-08 00:47:04.854382 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.25s 2026-03-08 00:47:04.854388 | orchestrator | common : Copy rabbitmq 
erl_inetrc to kolla toolbox ---------------------- 2.60s 2026-03-08 00:47:04.854394 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.41s 2026-03-08 00:47:04.854408 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.18s 2026-03-08 00:47:04.854415 | orchestrator | common : Creating log volume -------------------------------------------- 2.14s 2026-03-08 00:47:04.854421 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.01s 2026-03-08 00:47:04.854427 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.62s 2026-03-08 00:47:04.854434 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s 2026-03-08 00:47:04.854440 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.53s 2026-03-08 00:47:04.855686 | orchestrator | 2026-03-08 00:47:04 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:47:04.857759 | orchestrator | 2026-03-08 00:47:04 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:47:04.857787 | orchestrator | 2026-03-08 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:47:07.890380 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:47:07.890705 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 7feadde9-19d2-4755-8329-7ba739e693e4 is in state STARTED 2026-03-08 00:47:07.892416 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 7eb81cce-a2fd-447d-8413-c70b803436d4 is in state STARTED 2026-03-08 00:47:07.893236 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:47:07.894218 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 248d6bc3-13b3-4f0b-80a8-8dc4e88c4329 is in state STARTED 
2026-03-08 00:47:07.894960 | orchestrator | 2026-03-08 00:47:07 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:47:07.895088 | orchestrator | 2026-03-08 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:23.304938 |
orchestrator | 2026-03-08 00:47:23 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:47:23.305110 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 7feadde9-19d2-4755-8329-7ba739e693e4 is in state STARTED
2026-03-08 00:47:23.305128 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 7eb81cce-a2fd-447d-8413-c70b803436d4 is in state STARTED
2026-03-08 00:47:23.305140 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:47:23.305150 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 248d6bc3-13b3-4f0b-80a8-8dc4e88c4329 is in state SUCCESS
2026-03-08 00:47:23.305161 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:47:23.305173 | orchestrator | 2026-03-08 00:47:23 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED
2026-03-08 00:47:23.305184 | orchestrator | 2026-03-08 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:47:41.582111 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED
2026-03-08 00:47:41.582663 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 7feadde9-19d2-4755-8329-7ba739e693e4 is in state SUCCESS
2026-03-08 00:47:41.583916 |
orchestrator |
2026-03-08 00:47:41.583993 | orchestrator |
2026-03-08 00:47:41.584005 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:47:41.584014 | orchestrator |
2026-03-08 00:47:41.584020 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:47:41.584029 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.586) 0:00:00.586 **********
2026-03-08 00:47:41.584038 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:41.584047 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:41.584053 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:41.584059 | orchestrator |
2026-03-08 00:47:41.584066 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:47:41.584072 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.475) 0:00:01.061 **********
2026-03-08 00:47:41.584079 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-08 00:47:41.584087 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-08 00:47:41.584093 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-08 00:47:41.584099 | orchestrator |
2026-03-08 00:47:41.584104 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-08 00:47:41.584110 | orchestrator |
2026-03-08 00:47:41.584116 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-08 00:47:41.584150 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.549) 0:00:01.611 **********
2026-03-08 00:47:41.584155 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:47:41.584160 | orchestrator |
2026-03-08 00:47:41.584164 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-08 00:47:41.584168 | orchestrator | Sunday 08 March 2026 00:47:12 +0000 (0:00:00.723) 0:00:02.335 **********
2026-03-08 00:47:41.584172 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-08 00:47:41.584177 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-08 00:47:41.584181 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-08 00:47:41.584185 | orchestrator |
2026-03-08 00:47:41.584189 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-08 00:47:41.584193 | orchestrator | Sunday 08 March 2026 00:47:12 +0000 (0:00:00.803) 0:00:03.139 **********
2026-03-08 00:47:41.584196 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-08 00:47:41.584200 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-08 00:47:41.584205 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-08 00:47:41.584208 | orchestrator |
2026-03-08 00:47:41.584212 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-08 00:47:41.584216 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:02.133) 0:00:05.272 **********
2026-03-08 00:47:41.584220 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:41.584224 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:41.584227 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:41.584231 | orchestrator |
2026-03-08 00:47:41.584235 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-08 00:47:41.584239 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:01.900) 0:00:07.173 **********
2026-03-08 00:47:41.584243 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:47:41.584246 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:47:41.584250 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:47:41.584254 | orchestrator |
2026-03-08 00:47:41.584258 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:47:41.584278 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:41.584284 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:41.584288 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:47:41.584291 | orchestrator |
2026-03-08 00:47:41.584296 | orchestrator |
2026-03-08 00:47:41.584299 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:47:41.584303 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:02.318) 0:00:09.491 **********
2026-03-08 00:47:41.584307 | orchestrator | ===============================================================================
2026-03-08 00:47:41.584311 | orchestrator | memcached : Restart memcached container --------------------------------- 2.32s
2026-03-08 00:47:41.584315 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.13s
2026-03-08 00:47:41.584321 | orchestrator | memcached : Check memcached container ----------------------------------- 1.90s
2026-03-08 00:47:41.584326 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.80s
2026-03-08 00:47:41.584334 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.72s
2026-03-08 00:47:41.584343 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2026-03-08 00:47:41.584348 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s
2026-03-08 00:47:41.584354 | orchestrator |
2026-03-08 00:47:41.584360 | orchestrator |
2026-03-08 00:47:41.584365 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:47:41.584391 | orchestrator |
2026-03-08 00:47:41.584399 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:47:41.584405 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:00.250) 0:00:00.250 **********
2026-03-08 00:47:41.584411 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:47:41.584417 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:47:41.584479 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:47:41.584488 | orchestrator |
2026-03-08 00:47:41.584494 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:47:41.584515 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.526) 0:00:00.776 **********
2026-03-08 00:47:41.584523 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-08 00:47:41.584529 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-08 00:47:41.584535 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-08 00:47:41.584541 | orchestrator |
2026-03-08 00:47:41.584547 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-08 00:47:41.584553 | orchestrator |
2026-03-08 00:47:41.584559 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-08 00:47:41.584565 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.713) 0:00:01.489 **********
2026-03-08 00:47:41.584570 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:47:41.584576 | orchestrator |
2026-03-08 00:47:41.584582 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-08 00:47:41.584588 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.830) 0:00:02.320 **********
2026-03-08 00:47:41.584598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584703 | orchestrator |
2026-03-08 00:47:41.584708 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-08 00:47:41.584715 | orchestrator | Sunday 08 March 2026 00:47:13 +0000 (0:00:01.674) 0:00:03.994 **********
2026-03-08 00:47:41.584720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584784 | orchestrator |
2026-03-08 00:47:41.584790 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-08 00:47:41.584796 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:02.732) 0:00:06.727 **********
2026-03-08 00:47:41.584802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-08 00:47:41.584843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True,
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584850 | orchestrator | 2026-03-08 00:47:41.584860 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-08 00:47:41.584867 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:03.014) 0:00:09.741 ********** 2026-03-08 00:47:41.584873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584912 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-08 00:47:41.584918 | orchestrator | 2026-03-08 00:47:41.584924 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:41.584930 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:02.438) 0:00:12.179 ********** 2026-03-08 00:47:41.584935 | orchestrator | 2026-03-08 00:47:41.584941 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:41.584951 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.241) 0:00:12.421 ********** 2026-03-08 00:47:41.584958 | orchestrator | 2026-03-08 00:47:41.584964 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-08 00:47:41.585045 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.382) 0:00:12.803 ********** 2026-03-08 00:47:41.585052 | orchestrator | 2026-03-08 00:47:41.585058 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-08 00:47:41.585064 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.279) 0:00:13.083 ********** 2026-03-08 00:47:41.585071 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:41.585077 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:41.585083 | orchestrator | changed: [testbed-node-1] 
2026-03-08 00:47:41.585089 | orchestrator | 2026-03-08 00:47:41.585096 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-08 00:47:41.585102 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:09.598) 0:00:22.682 ********** 2026-03-08 00:47:41.585108 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:47:41.585114 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:47:41.585120 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:47:41.585126 | orchestrator | 2026-03-08 00:47:41.585133 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:47:41.585139 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:41.585147 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:41.585153 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:47:41.585165 | orchestrator | 2026-03-08 00:47:41.585171 | orchestrator | 2026-03-08 00:47:41.585189 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:47:41.585195 | orchestrator | Sunday 08 March 2026 00:47:39 +0000 (0:00:07.815) 0:00:30.497 ********** 2026-03-08 00:47:41.585201 | orchestrator | =============================================================================== 2026-03-08 00:47:41.585207 | orchestrator | redis : Restart redis container ----------------------------------------- 9.60s 2026-03-08 00:47:41.585213 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.82s 2026-03-08 00:47:41.585220 | orchestrator | redis : Copying over redis config files --------------------------------- 3.01s 2026-03-08 00:47:41.585226 | orchestrator | redis : Copying over default config.json files 
-------------------------- 2.73s 2026-03-08 00:47:41.585232 | orchestrator | redis : Check redis containers ------------------------------------------ 2.44s 2026-03-08 00:47:41.585239 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.67s 2026-03-08 00:47:41.585245 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.90s 2026-03-08 00:47:41.585251 | orchestrator | redis : include_tasks --------------------------------------------------- 0.83s 2026-03-08 00:47:41.585257 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2026-03-08 00:47:41.585263 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2026-03-08 00:47:41.585270 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 7eb81cce-a2fd-447d-8413-c70b803436d4 is in state STARTED 2026-03-08 00:47:41.585364 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:47:41.585653 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:47:41.586398 | orchestrator | 2026-03-08 00:47:41 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:47:41.586548 | orchestrator | 2026-03-08 00:47:41 | INFO  | Wait 1 second(s) until the next check 
2026-03-08 00:48:15.262509 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:15.264245 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 7eb81cce-a2fd-447d-8413-c70b803436d4 is in state SUCCESS 2026-03-08 00:48:15.265780 | orchestrator | 2026-03-08 00:48:15.265829 | orchestrator | 2026-03-08 00:48:15.265837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:48:15.265843 | orchestrator | 2026-03-08 00:48:15.265849 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-08 00:48:15.265854 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:00.315) 0:00:00.315 ********** 2026-03-08 00:48:15.265857 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:48:15.265861 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:48:15.265864 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:48:15.265868 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:48:15.265871 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:48:15.265874 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:48:15.265877 | orchestrator | 2026-03-08 00:48:15.265880 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:48:15.265884 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:01.000) 0:00:01.315 ********** 2026-03-08 00:48:15.265888 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265903 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265915 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265918 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265922 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265938 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-08 00:48:15.265943 | orchestrator | 2026-03-08 00:48:15.265946 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-08 00:48:15.265949 | orchestrator | 2026-03-08 00:48:15.265953 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-08 00:48:15.265956 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.929) 
0:00:02.244 ********** 2026-03-08 00:48:15.265960 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:48:15.265964 | orchestrator | 2026-03-08 00:48:15.265967 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-08 00:48:15.265970 | orchestrator | Sunday 08 March 2026 00:47:12 +0000 (0:00:01.302) 0:00:03.547 ********** 2026-03-08 00:48:15.265974 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-08 00:48:15.265977 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-08 00:48:15.265980 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-08 00:48:15.265983 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-08 00:48:15.265987 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-08 00:48:15.265990 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-08 00:48:15.265993 | orchestrator | 2026-03-08 00:48:15.265996 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-08 00:48:15.265999 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:01.669) 0:00:05.217 ********** 2026-03-08 00:48:15.266002 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-08 00:48:15.266006 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-08 00:48:15.266009 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-08 00:48:15.266034 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-08 00:48:15.266056 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-08 00:48:15.266061 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-08 00:48:15.266064 | orchestrator | 2026-03-08 00:48:15.266067 | orchestrator | TASK 
[module-load : Drop module persistence] *********************************** 2026-03-08 00:48:15.266071 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:01.803) 0:00:07.020 ********** 2026-03-08 00:48:15.266076 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-08 00:48:15.266083 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:15.266091 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-08 00:48:15.266097 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:15.266102 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-08 00:48:15.266107 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:15.266112 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-08 00:48:15.266117 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:15.266123 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-08 00:48:15.266128 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:15.266134 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-08 00:48:15.266139 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:15.266144 | orchestrator | 2026-03-08 00:48:15.266148 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-08 00:48:15.266161 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:01.702) 0:00:08.722 ********** 2026-03-08 00:48:15.266169 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:15.266173 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:15.266179 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:15.266184 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:15.266189 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:15.266193 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:15.266198 | orchestrator | 2026-03-08 00:48:15.266203 | orchestrator | TASK 
[openvswitch : Ensuring config directories exist] ************************* 2026-03-08 00:48:15.266208 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:00.828) 0:00:09.551 ********** 2026-03-08 00:48:15.266227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266283 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266325 | orchestrator | 2026-03-08 00:48:15.266328 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-08 00:48:15.266331 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:02.486) 0:00:12.038 ********** 2026-03-08 00:48:15.266337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-08 00:48:15.266381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266401 | orchestrator | 2026-03-08 00:48:15.266406 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-08 00:48:15.266423 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:03.915) 0:00:15.953 ********** 2026-03-08 00:48:15.266430 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:48:15.266435 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:48:15.266441 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:15.266446 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:48:15.266451 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:15.266456 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:15.266461 | orchestrator | 2026-03-08 00:48:15.266467 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-08 00:48:15.266472 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:00.914) 0:00:16.868 ********** 2026-03-08 00:48:15.266478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266484 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-08 00:48:15.266580 | orchestrator | 2026-03-08 00:48:15.266585 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266590 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:02.518) 0:00:19.387 ********** 2026-03-08 00:48:15.266595 | orchestrator | 2026-03-08 00:48:15.266600 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266605 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:00.144) 0:00:19.531 ********** 2026-03-08 00:48:15.266614 | orchestrator | 2026-03-08 00:48:15.266619 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266623 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.165) 0:00:19.696 ********** 2026-03-08 00:48:15.266628 | orchestrator | 2026-03-08 00:48:15.266633 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266638 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.129) 0:00:19.826 ********** 2026-03-08 00:48:15.266642 | orchestrator | 2026-03-08 00:48:15.266647 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266653 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.122) 0:00:19.948 ********** 2026-03-08 00:48:15.266658 | orchestrator | 2026-03-08 00:48:15.266663 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-08 00:48:15.266667 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.127) 0:00:20.075 ********** 2026-03-08 00:48:15.266672 | orchestrator | 2026-03-08 00:48:15.266677 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-08 00:48:15.266682 | orchestrator | Sunday 08 
March 2026 00:47:29 +0000 (0:00:00.133) 0:00:20.209 ********** 2026-03-08 00:48:15.266688 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:15.266693 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:15.266698 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:15.266703 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:15.266707 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:15.266712 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:15.266716 | orchestrator | 2026-03-08 00:48:15.266721 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-08 00:48:15.266726 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:09.186) 0:00:29.396 ********** 2026-03-08 00:48:15.266731 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:48:15.266736 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:48:15.266741 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:48:15.266746 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:48:15.266751 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:48:15.266756 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:48:15.266763 | orchestrator | 2026-03-08 00:48:15.266771 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-08 00:48:15.266776 | orchestrator | Sunday 08 March 2026 00:47:40 +0000 (0:00:01.348) 0:00:30.744 ********** 2026-03-08 00:48:15.266781 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:15.266786 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:15.266791 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:15.266795 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:15.266801 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:15.266805 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:15.266811 | orchestrator | 2026-03-08 00:48:15.266816 | orchestrator | TASK [openvswitch : Set system-id, hostname 
and hw-offload] ******************** 2026-03-08 00:48:15.266821 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:09.821) 0:00:40.566 ********** 2026-03-08 00:48:15.266826 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-08 00:48:15.266831 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-08 00:48:15.266837 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-08 00:48:15.266842 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-08 00:48:15.266846 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-08 00:48:15.266956 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-08 00:48:15.266971 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-08 00:48:15.266977 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-08 00:48:15.266982 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-08 00:48:15.266991 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-08 00:48:15.266996 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-08 00:48:15.267001 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-08 00:48:15.267007 
| orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267012 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267017 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267022 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267027 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267033 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-08 00:48:15.267041 | orchestrator | 2026-03-08 00:48:15.267047 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-08 00:48:15.267051 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:08.338) 0:00:48.904 ********** 2026-03-08 00:48:15.267056 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-08 00:48:15.267060 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:15.267065 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-08 00:48:15.267069 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:15.267074 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-08 00:48:15.267079 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:15.267083 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-08 00:48:15.267088 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-08 00:48:15.267093 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-08 00:48:15.267099 | orchestrator | 2026-03-08 00:48:15.267104 | 
orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-08 00:48:15.267109 | orchestrator | Sunday 08 March 2026 00:48:01 +0000 (0:00:02.757) 0:00:51.662 ********** 2026-03-08 00:48:15.267114 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:15.267119 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:15.267124 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:48:15.267129 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:48:15.267134 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-08 00:48:15.267139 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:48:15.267144 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:15.267149 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:15.267154 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-08 00:48:15.267160 | orchestrator | 2026-03-08 00:48:15.267165 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-08 00:48:15.267170 | orchestrator | Sunday 08 March 2026 00:48:04 +0000 (0:00:03.747) 0:00:55.409 ********** 2026-03-08 00:48:15.267176 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:48:15.267186 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:48:15.267191 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:48:15.267197 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:48:15.267202 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:48:15.267207 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:48:15.267212 | orchestrator | 2026-03-08 00:48:15.267218 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:48:15.267223 | orchestrator | testbed-node-0 : ok=15  changed=11  
unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:48:15.267229 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:48:15.267234 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:48:15.267239 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:15.267245 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:15.267258 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:48:15.267266 | orchestrator | 2026-03-08 00:48:15.267271 | orchestrator | 2026-03-08 00:48:15.267276 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:48:15.267281 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:08.884) 0:01:04.294 ********** 2026-03-08 00:48:15.267286 | orchestrator | =============================================================================== 2026-03-08 00:48:15.267292 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.71s 2026-03-08 00:48:15.267301 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.19s 2026-03-08 00:48:15.267306 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.34s 2026-03-08 00:48:15.267311 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.92s 2026-03-08 00:48:15.267317 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.75s 2026-03-08 00:48:15.267321 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.76s 2026-03-08 00:48:15.267327 | orchestrator | openvswitch : Check openvswitch 
containers ------------------------------ 2.52s 2026-03-08 00:48:15.267332 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.49s 2026-03-08 00:48:15.267337 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.80s 2026-03-08 00:48:15.267343 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.70s 2026-03-08 00:48:15.267347 | orchestrator | module-load : Load modules ---------------------------------------------- 1.67s 2026-03-08 00:48:15.267350 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.35s 2026-03-08 00:48:15.267353 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.30s 2026-03-08 00:48:15.267356 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.00s 2026-03-08 00:48:15.267360 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2026-03-08 00:48:15.267363 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.92s 2026-03-08 00:48:15.267366 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.83s 2026-03-08 00:48:15.267369 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.82s 2026-03-08 00:48:15.267372 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:15.272405 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:15.274276 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:15.276232 | orchestrator | 2026-03-08 00:48:15 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:15.276284 | orchestrator | 
2026-03-08 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:18.310082 | orchestrator | 2026-03-08 00:48:18 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:18.311750 | orchestrator | 2026-03-08 00:48:18 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:18.313137 | orchestrator | 2026-03-08 00:48:18 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:18.314551 | orchestrator | 2026-03-08 00:48:18 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:18.315843 | orchestrator | 2026-03-08 00:48:18 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:18.315973 | orchestrator | 2026-03-08 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:21.361473 | orchestrator | 2026-03-08 00:48:21 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:21.361650 | orchestrator | 2026-03-08 00:48:21 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:21.364639 | orchestrator | 2026-03-08 00:48:21 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:21.365453 | orchestrator | 2026-03-08 00:48:21 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:21.366436 | orchestrator | 2026-03-08 00:48:21 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:21.366460 | orchestrator | 2026-03-08 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:24.408245 | orchestrator | 2026-03-08 00:48:24 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:24.408303 | orchestrator | 2026-03-08 00:48:24 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:24.409105 | orchestrator | 2026-03-08 00:48:24 | INFO  | 
Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:24.409842 | orchestrator | 2026-03-08 00:48:24 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:24.410783 | orchestrator | 2026-03-08 00:48:24 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:24.410810 | orchestrator | 2026-03-08 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:27.495336 | orchestrator | 2026-03-08 00:48:27 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:27.496932 | orchestrator | 2026-03-08 00:48:27 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:27.497581 | orchestrator | 2026-03-08 00:48:27 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:27.499480 | orchestrator | 2026-03-08 00:48:27 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:27.500094 | orchestrator | 2026-03-08 00:48:27 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:27.500119 | orchestrator | 2026-03-08 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:30.534774 | orchestrator | 2026-03-08 00:48:30 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:30.535838 | orchestrator | 2026-03-08 00:48:30 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:30.537426 | orchestrator | 2026-03-08 00:48:30 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:30.538343 | orchestrator | 2026-03-08 00:48:30 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:30.539176 | orchestrator | 2026-03-08 00:48:30 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:30.539283 | orchestrator | 2026-03-08 00:48:30 | INFO  | Wait 1 
second(s) until the next check 2026-03-08 00:48:33.585096 | orchestrator | 2026-03-08 00:48:33 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:33.585598 | orchestrator | 2026-03-08 00:48:33 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:33.587376 | orchestrator | 2026-03-08 00:48:33 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:33.588673 | orchestrator | 2026-03-08 00:48:33 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:33.589213 | orchestrator | 2026-03-08 00:48:33 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:33.589240 | orchestrator | 2026-03-08 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:36.629777 | orchestrator | 2026-03-08 00:48:36 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:36.633274 | orchestrator | 2026-03-08 00:48:36 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:36.633325 | orchestrator | 2026-03-08 00:48:36 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:36.634309 | orchestrator | 2026-03-08 00:48:36 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:36.635242 | orchestrator | 2026-03-08 00:48:36 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:36.635259 | orchestrator | 2026-03-08 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:39.706238 | orchestrator | 2026-03-08 00:48:39 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:39.707326 | orchestrator | 2026-03-08 00:48:39 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:39.709805 | orchestrator | 2026-03-08 00:48:39 | INFO  | Task 
212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:39.710844 | orchestrator | 2026-03-08 00:48:39 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:39.712115 | orchestrator | 2026-03-08 00:48:39 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:39.713100 | orchestrator | 2026-03-08 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:42.748739 | orchestrator | 2026-03-08 00:48:42 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:42.750731 | orchestrator | 2026-03-08 00:48:42 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:42.753846 | orchestrator | 2026-03-08 00:48:42 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:42.756834 | orchestrator | 2026-03-08 00:48:42 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:42.758703 | orchestrator | 2026-03-08 00:48:42 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:42.758771 | orchestrator | 2026-03-08 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:45.799249 | orchestrator | 2026-03-08 00:48:45 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:45.804052 | orchestrator | 2026-03-08 00:48:45 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:45.804104 | orchestrator | 2026-03-08 00:48:45 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:45.805596 | orchestrator | 2026-03-08 00:48:45 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:45.806707 | orchestrator | 2026-03-08 00:48:45 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:45.806896 | orchestrator | 2026-03-08 00:48:45 | INFO  | Wait 1 
second(s) until the next check 2026-03-08 00:48:48.936427 | orchestrator | 2026-03-08 00:48:48 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:48.938506 | orchestrator | 2026-03-08 00:48:48 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:48.940787 | orchestrator | 2026-03-08 00:48:48 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:48.942723 | orchestrator | 2026-03-08 00:48:48 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:48.944150 | orchestrator | 2026-03-08 00:48:48 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:48.944518 | orchestrator | 2026-03-08 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:52.015326 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:52.015708 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:52.017546 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:52.018537 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:52.019464 | orchestrator | 2026-03-08 00:48:52 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:52.019506 | orchestrator | 2026-03-08 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:55.057110 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:55.060528 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:55.084061 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 
212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:55.090181 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:55.092345 | orchestrator | 2026-03-08 00:48:55 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:55.092402 | orchestrator | 2026-03-08 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:48:58.195543 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:48:58.195631 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:48:58.195640 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:48:58.195671 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:48:58.195678 | orchestrator | 2026-03-08 00:48:58 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:48:58.195685 | orchestrator | 2026-03-08 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:01.312302 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:49:01.312758 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:01.312769 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:01.312774 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:01.312778 | orchestrator | 2026-03-08 00:49:01 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:01.312790 | orchestrator | 2026-03-08 00:49:01 | INFO  | Wait 1 
second(s) until the next check 2026-03-08 00:49:04.405665 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:49:04.405736 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:04.405742 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:04.405747 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:04.405752 | orchestrator | 2026-03-08 00:49:04 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:04.405756 | orchestrator | 2026-03-08 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:07.451895 | orchestrator | 2026-03-08 00:49:07 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:49:07.451989 | orchestrator | 2026-03-08 00:49:07 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:07.452001 | orchestrator | 2026-03-08 00:49:07 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:07.452007 | orchestrator | 2026-03-08 00:49:07 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:07.453704 | orchestrator | 2026-03-08 00:49:07 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:07.453777 | orchestrator | 2026-03-08 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:10.483688 | orchestrator | 2026-03-08 00:49:10 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state STARTED 2026-03-08 00:49:10.484119 | orchestrator | 2026-03-08 00:49:10 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:10.485373 | orchestrator | 2026-03-08 00:49:10 | INFO  | Task 
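The repeated "is in state STARTED … Wait 1 second(s) until the next check" messages above come from a simple poll-until-done loop: re-query each task ID, print its state, and sleep one second between rounds until every task leaves STARTED. A minimal sketch of that pattern (the `get_state` callback and `wait_for_tasks` name are assumptions for illustration, not the tool's actual API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll each task's state until none is STARTED (hypothetical helper)."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return True
```

With a stub `get_state` that returns STARTED twice and then SUCCESS, the loop polls three times and exits, matching the cadence seen in the log.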
212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:10.487484 | orchestrator | 2026-03-08 00:49:10 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:10.487775 | orchestrator | 2026-03-08 00:49:10 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:10.487903 | orchestrator | 2026-03-08 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:13.529279 | orchestrator | 2026-03-08 00:49:13 | INFO  | Task 90275470-a5b4-491a-874f-4bf51d7bc505 is in state SUCCESS 2026-03-08 00:49:13.530532 | orchestrator | 2026-03-08 00:49:13.530606 | orchestrator | 2026-03-08 00:49:13.530616 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-08 00:49:13.530624 | orchestrator | 2026-03-08 00:49:13.530631 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-08 00:49:13.530638 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-08 00:49:13.530644 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:49:13.530651 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:49:13.530658 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:49:13.530664 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.530670 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.530676 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.530682 | orchestrator | 2026-03-08 00:49:13.530689 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-08 00:49:13.530697 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.657) 0:00:00.888 ********** 2026-03-08 00:49:13.530704 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.530712 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.530718 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.530724 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.530731 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.530736 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.530740 | orchestrator | 2026-03-08 00:49:13.530744 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-08 00:49:13.530748 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:00.535) 0:00:01.424 ********** 2026-03-08 00:49:13.530752 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.530756 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.530760 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.530764 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.530768 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.530771 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.530775 | orchestrator | 2026-03-08 00:49:13.530781 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-08 00:49:13.530787 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:00.581) 0:00:02.006 ********** 2026-03-08 00:49:13.530793 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:13.530802 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:13.530810 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:13.530875 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:13.530882 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:13.530887 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:13.530893 | orchestrator | 2026-03-08 00:49:13.530899 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-08 00:49:13.530904 | orchestrator | Sunday 08 March 2026 00:44:50 +0000 (0:00:02.236) 0:00:04.242 ********** 2026-03-08 00:49:13.530909 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:13.530915 | orchestrator 
| changed: [testbed-node-5] 2026-03-08 00:49:13.530937 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:13.530944 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:13.530950 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:13.531022 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:13.531029 | orchestrator | 2026-03-08 00:49:13.531035 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-08 00:49:13.531041 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:01.668) 0:00:05.911 ********** 2026-03-08 00:49:13.531049 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:13.531057 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:13.531064 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:13.531070 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:13.531076 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:13.531081 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:13.531087 | orchestrator | 2026-03-08 00:49:13.531111 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-08 00:49:13.531118 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:01.738) 0:00:07.649 ********** 2026-03-08 00:49:13.531125 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531131 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531138 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531144 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531151 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531155 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531160 | orchestrator | 2026-03-08 00:49:13.531164 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-08 00:49:13.531168 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.550) 
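The three k3s_prereq tasks above (IPv4 forwarding, IPv6 forwarding, IPv6 router advertisements) typically set standard kernel sysctls. The keys below are the usual ones for these settings; the exact file path and values the role writes are assumptions:

```
# Illustrative sysctl settings for the k3s_prereq tasks above
# (actual file name/values written by the role may differ)
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
```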
0:00:08.200 ********** 2026-03-08 00:49:13.531173 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531177 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531181 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531186 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531190 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531194 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531199 | orchestrator | 2026-03-08 00:49:13.531203 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-08 00:49:13.531208 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.818) 0:00:09.019 ********** 2026-03-08 00:49:13.531212 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531216 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531221 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531225 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531229 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531234 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531238 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531243 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531248 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531255 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531279 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531285 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531291 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531297 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531303 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531309 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 00:49:13.531315 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 00:49:13.531320 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531326 | orchestrator | 2026-03-08 00:49:13.531332 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-08 00:49:13.531338 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:00.577) 0:00:09.596 ********** 2026-03-08 00:49:13.531345 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531350 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531356 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531362 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531368 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531374 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531380 | orchestrator | 2026-03-08 00:49:13.531386 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-08 00:49:13.531401 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:01.453) 0:00:11.050 ********** 2026-03-08 00:49:13.531407 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:49:13.531414 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:49:13.531420 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:49:13.531425 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.531431 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.531437 | orchestrator | ok: 
[testbed-node-2] 2026-03-08 00:49:13.531442 | orchestrator | 2026-03-08 00:49:13.531448 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-08 00:49:13.531454 | orchestrator | Sunday 08 March 2026 00:44:57 +0000 (0:00:00.725) 0:00:11.776 ********** 2026-03-08 00:49:13.531460 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:13.531465 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:49:13.531470 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:49:13.531476 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:13.531482 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:49:13.531488 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:13.531493 | orchestrator | 2026-03-08 00:49:13.531499 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-08 00:49:13.531505 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:05.508) 0:00:17.285 ********** 2026-03-08 00:49:13.531510 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531516 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531529 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531535 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531541 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531546 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531552 | orchestrator | 2026-03-08 00:49:13.531558 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-08 00:49:13.531563 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:01.343) 0:00:18.629 ********** 2026-03-08 00:49:13.531568 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531574 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531580 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531586 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:49:13.531592 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531597 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531603 | orchestrator | 2026-03-08 00:49:13.531609 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-08 00:49:13.531616 | orchestrator | Sunday 08 March 2026 00:45:06 +0000 (0:00:02.477) 0:00:21.106 ********** 2026-03-08 00:49:13.531621 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531627 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531632 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531637 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531643 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531649 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531655 | orchestrator | 2026-03-08 00:49:13.531660 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-08 00:49:13.531666 | orchestrator | Sunday 08 March 2026 00:45:08 +0000 (0:00:01.364) 0:00:22.471 ********** 2026-03-08 00:49:13.531672 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-08 00:49:13.531678 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-08 00:49:13.531684 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531690 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-08 00:49:13.531696 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-08 00:49:13.531702 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531707 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-08 00:49:13.531713 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-08 00:49:13.531719 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531733 
| orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-08 00:49:13.531738 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-08 00:49:13.531745 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-08 00:49:13.531752 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-08 00:49:13.531757 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531764 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531770 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-08 00:49:13.531776 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-08 00:49:13.531782 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531787 | orchestrator | 2026-03-08 00:49:13.531793 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-08 00:49:13.531809 | orchestrator | Sunday 08 March 2026 00:45:10 +0000 (0:00:02.166) 0:00:24.637 ********** 2026-03-08 00:49:13.531840 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531847 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.531852 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531857 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531863 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531868 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531873 | orchestrator | 2026-03-08 00:49:13.531878 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-08 00:49:13.531884 | orchestrator | Sunday 08 March 2026 00:45:11 +0000 (0:00:00.743) 0:00:25.381 ********** 2026-03-08 00:49:13.531890 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.531894 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.531898 | orchestrator | skipping: [testbed-node-5] 2026-03-08 
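The k3s_custom_registries tasks above are skipped because no custom registry is configured in this run. For reference, when it is used, `/etc/rancher/k3s/registries.yaml` follows the documented k3s mirror format; the endpoint URL below is a placeholder:

```yaml
# /etc/rancher/k3s/registries.yaml (example only; endpoint is a placeholder)
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
```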
00:49:13.531901 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.531905 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.531909 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.531913 | orchestrator | 2026-03-08 00:49:13.531916 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-08 00:49:13.531920 | orchestrator | 2026-03-08 00:49:13.531924 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-08 00:49:13.531928 | orchestrator | Sunday 08 March 2026 00:45:12 +0000 (0:00:01.438) 0:00:26.820 ********** 2026-03-08 00:49:13.531933 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.531939 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.531945 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.531962 | orchestrator | 2026-03-08 00:49:13.531972 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-08 00:49:13.531978 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:01.675) 0:00:28.496 ********** 2026-03-08 00:49:13.531984 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.531989 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.531995 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.532001 | orchestrator | 2026-03-08 00:49:13.532007 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-08 00:49:13.532012 | orchestrator | Sunday 08 March 2026 00:45:16 +0000 (0:00:01.721) 0:00:30.218 ********** 2026-03-08 00:49:13.532018 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.532024 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.532030 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.532036 | orchestrator | 2026-03-08 00:49:13.532042 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 
2026-03-08 00:49:13.532048 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:01.159) 0:00:31.377 **********
2026-03-08 00:49:13.532054 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532061 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532067 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532074 | orchestrator |
2026-03-08 00:49:13.532087 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-08 00:49:13.532101 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:00.775) 0:00:32.153 **********
2026-03-08 00:49:13.532106 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532110 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532114 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532118 | orchestrator |
2026-03-08 00:49:13.532121 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-08 00:49:13.532125 | orchestrator | Sunday 08 March 2026 00:45:18 +0000 (0:00:00.280) 0:00:32.433 **********
2026-03-08 00:49:13.532129 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532133 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532136 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532140 | orchestrator |
2026-03-08 00:49:13.532144 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-08 00:49:13.532148 | orchestrator | Sunday 08 March 2026 00:45:19 +0000 (0:00:00.977) 0:00:33.411 **********
2026-03-08 00:49:13.532151 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532155 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532159 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532163 | orchestrator |
2026-03-08 00:49:13.532166 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-08 00:49:13.532170 | orchestrator | Sunday 08 March 2026 00:45:21 +0000 (0:00:01.953) 0:00:35.365 **********
2026-03-08 00:49:13.532174 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:49:13.532178 | orchestrator |
2026-03-08 00:49:13.532181 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-08 00:49:13.532185 | orchestrator | Sunday 08 March 2026 00:45:21 +0000 (0:00:00.598) 0:00:35.963 **********
2026-03-08 00:49:13.532189 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532193 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532196 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532200 | orchestrator |
2026-03-08 00:49:13.532204 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-08 00:49:13.532208 | orchestrator | Sunday 08 March 2026 00:45:26 +0000 (0:00:04.416) 0:00:40.380 **********
2026-03-08 00:49:13.532211 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532215 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532219 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532222 | orchestrator |
2026-03-08 00:49:13.532226 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-08 00:49:13.532230 | orchestrator | Sunday 08 March 2026 00:45:26 +0000 (0:00:00.604) 0:00:40.984 **********
2026-03-08 00:49:13.532234 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532238 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532241 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532245 | orchestrator |
2026-03-08 00:49:13.532249 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-08 00:49:13.532252 | orchestrator | Sunday 08 March 2026 00:45:28 +0000 (0:00:01.194) 0:00:42.179 **********
2026-03-08 00:49:13.532256 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532260 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532264 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532267 | orchestrator |
2026-03-08 00:49:13.532271 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-08 00:49:13.532281 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:02.023) 0:00:44.202 **********
2026-03-08 00:49:13.532285 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532289 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532293 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532297 | orchestrator |
2026-03-08 00:49:13.532300 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-08 00:49:13.532304 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:00.501) 0:00:44.704 **********
2026-03-08 00:49:13.532312 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532315 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532319 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532323 | orchestrator |
2026-03-08 00:49:13.532327 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-08 00:49:13.532330 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:00.308) 0:00:45.012 **********
2026-03-08 00:49:13.532334 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532338 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532341 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532345 | orchestrator |
2026-03-08 00:49:13.532349 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-08 00:49:13.532352 | orchestrator | Sunday 08 March 2026 00:45:32 +0000 (0:00:01.748) 0:00:46.761 **********
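The "Detect Kubernetes version for label compatibility" task above, paired with the following "Set node role label selector" task, exists because the control-plane node-role label changed across Kubernetes releases (`node-role.kubernetes.io/master` was deprecated in favor of `node-role.kubernetes.io/control-plane`). A sketch of that version switch; the 1.20 cutoff shown is an assumption for illustration, not necessarily the role's exact boundary:

```python
def role_label_selector(k8s_version: str) -> str:
    """Pick the node-role label to match based on the cluster version.

    Assumes the 1.20 cutoff for the master -> control-plane label
    rename; the actual role logic may differ.
    """
    # Strip a leading "v" and any build suffix like "+k3s1".
    major, minor = (int(p) for p in k8s_version.lstrip("v").split(".")[:2])
    if (major, minor) >= (1, 20):
        return "node-role.kubernetes.io/control-plane"
    return "node-role.kubernetes.io/master"

print(role_label_selector("v1.28.5+k3s1"))
```

Keeping the selector version-dependent lets the same role run against both old and new clusters without hard-coding one label.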
2026-03-08 00:49:13.532356 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532360 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532364 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532367 | orchestrator |
2026-03-08 00:49:13.532371 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-08 00:49:13.532375 | orchestrator | Sunday 08 March 2026 00:45:35 +0000 (0:00:02.970) 0:00:49.732 **********
2026-03-08 00:49:13.532379 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532382 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532386 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532390 | orchestrator |
2026-03-08 00:49:13.532394 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-08 00:49:13.532397 | orchestrator | Sunday 08 March 2026 00:45:36 +0000 (0:00:00.576) 0:00:50.308 **********
2026-03-08 00:49:13.532401 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:13.532406 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:13.532410 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-08 00:49:13.532417 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:13.532421 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:13.532425 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-08 00:49:13.532429 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:13.532433 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:13.532436 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-08 00:49:13.532440 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-08 00:49:13.532444 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-08 00:49:13.532448 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
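The join verification above is Ansible's `until`/`retries`/`delay` pattern: the check re-runs (20 attempts allowed here) until all three servers report as joined, and the "FAILED - RETRYING" lines are expected noise while etcd quorum forms. The control flow is roughly this, sketched outside Ansible:

```python
import time

def wait_until(check, retries=20, delay=10):
    """Re-run `check` until it succeeds or attempts run out,
    mirroring Ansible's until/retries/delay loop semantics."""
    for attempt in range(retries):
        if check():
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False

# Simulated cluster check: nodes "join" on the third attempt,
# analogous to the three retries seen per node in the log above.
state = {"calls": 0}
def all_nodes_ready():
    state["calls"] += 1
    return state["calls"] >= 3

joined = wait_until(all_nodes_ready, retries=20, delay=0)
print(joined, state["calls"])
```

In the run above the task eventually returned `ok` for all three nodes after roughly 43 seconds, well inside the retry budget.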
2026-03-08 00:49:13.532451 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532455 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532459 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532463 | orchestrator |
2026-03-08 00:49:13.532470 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-08 00:49:13.532474 | orchestrator | Sunday 08 March 2026 00:46:19 +0000 (0:00:43.615) 0:01:33.924 **********
2026-03-08 00:49:13.532478 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532482 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532485 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532489 | orchestrator |
2026-03-08 00:49:13.532493 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-08 00:49:13.532497 | orchestrator | Sunday 08 March 2026 00:46:20 +0000 (0:00:00.348) 0:01:34.272 **********
2026-03-08 00:49:13.532500 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532504 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532508 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532512 | orchestrator |
2026-03-08 00:49:13.532515 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-08 00:49:13.532519 | orchestrator | Sunday 08 March 2026 00:46:21 +0000 (0:00:01.256) 0:01:35.528 **********
2026-03-08 00:49:13.532523 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532527 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532530 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532534 | orchestrator |
2026-03-08 00:49:13.532541 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-08 00:49:13.532545 | orchestrator | Sunday 08 March 2026 00:46:23 +0000 (0:00:01.990) 0:01:37.519 **********
2026-03-08 00:49:13.532549 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532552 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532556 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532560 | orchestrator |
2026-03-08 00:49:13.532563 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-08 00:49:13.532567 | orchestrator | Sunday 08 March 2026 00:46:48 +0000 (0:00:25.595) 0:02:03.114 **********
2026-03-08 00:49:13.532571 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532575 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532578 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532582 | orchestrator |
2026-03-08 00:49:13.532586 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-08 00:49:13.532590 | orchestrator | Sunday 08 March 2026 00:46:49 +0000 (0:00:00.746) 0:02:03.861 **********
2026-03-08 00:49:13.532594 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532597 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532601 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532605 | orchestrator |
2026-03-08 00:49:13.532608 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-08 00:49:13.532612 | orchestrator | Sunday 08 March 2026 00:46:50 +0000 (0:00:00.647) 0:02:04.508 **********
2026-03-08 00:49:13.532616 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532620 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532623 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532627 | orchestrator |
2026-03-08 00:49:13.532631 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-08 00:49:13.532635 | orchestrator | Sunday 08 March 2026 00:46:50 +0000 (0:00:00.645) 0:02:05.154 **********
2026-03-08 00:49:13.532639 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532642 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532646 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532650 | orchestrator |
2026-03-08 00:49:13.532654 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-08 00:49:13.532657 | orchestrator | Sunday 08 March 2026 00:46:52 +0000 (0:00:01.085) 0:02:06.239 **********
2026-03-08 00:49:13.532661 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532665 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532669 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532672 | orchestrator |
2026-03-08 00:49:13.532676 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-08 00:49:13.532680 | orchestrator | Sunday 08 March 2026 00:46:52 +0000 (0:00:00.310) 0:02:06.550 **********
2026-03-08 00:49:13.532687 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532691 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532694 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532698 | orchestrator |
2026-03-08 00:49:13.532706 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-08 00:49:13.532710 | orchestrator | Sunday 08 March 2026 00:46:53 +0000 (0:00:00.652) 0:02:07.202 **********
2026-03-08 00:49:13.532713 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532717 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532721 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532725 | orchestrator |
2026-03-08 00:49:13.532729 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-08 00:49:13.532732 | orchestrator | Sunday 08 March 2026 00:46:53 +0000 (0:00:00.633) 0:02:07.836 **********
2026-03-08 00:49:13.532736 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532740 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532744 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532747 | orchestrator |
2026-03-08 00:49:13.532751 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-08 00:49:13.532755 | orchestrator | Sunday 08 March 2026 00:46:54 +0000 (0:00:01.066) 0:02:08.903 **********
2026-03-08 00:49:13.532759 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:13.532762 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:13.532766 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:13.532770 | orchestrator |
2026-03-08 00:49:13.532774 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-08 00:49:13.532777 | orchestrator | Sunday 08 March 2026 00:46:55 +0000 (0:00:00.771) 0:02:09.674 **********
2026-03-08 00:49:13.532781 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532785 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532788 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532792 | orchestrator |
2026-03-08 00:49:13.532796 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-08 00:49:13.532800 | orchestrator | Sunday 08 March 2026 00:46:55 +0000 (0:00:00.285) 0:02:09.960 **********
2026-03-08 00:49:13.532803 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.532807 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.532811 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.532834 | orchestrator |
2026-03-08 00:49:13.532840 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-08 00:49:13.532846 | orchestrator | Sunday 08 March 2026 00:46:56 +0000 (0:00:00.284) 0:02:10.245 **********
2026-03-08 00:49:13.532852 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532858 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532864 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532870 | orchestrator |
2026-03-08 00:49:13.532876 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-08 00:49:13.532882 | orchestrator | Sunday 08 March 2026 00:46:57 +0000 (0:00:01.009) 0:02:11.255 **********
2026-03-08 00:49:13.532888 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.532894 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.532901 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.532907 | orchestrator |
2026-03-08 00:49:13.532913 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-08 00:49:13.532919 | orchestrator | Sunday 08 March 2026 00:46:57 +0000 (0:00:00.711) 0:02:11.967 **********
2026-03-08 00:49:13.532925 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:13.532935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:13.532943 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-08 00:49:13.532958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:13.532965 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:13.532971 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-08 00:49:13.532978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:13.532985 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:13.532991 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-08 00:49:13.532998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:13.533002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-08 00:49:13.533006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:13.533010 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:13.533013 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-08 00:49:13.533017 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:13.533021 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:13.533024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-08 00:49:13.533028 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:13.533034 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-08 00:49:13.533040 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-08 00:49:13.533046 | orchestrator |
2026-03-08 00:49:13.533065 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-08 00:49:13.533072 | orchestrator |
2026-03-08 00:49:13.533078 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-08 00:49:13.533083 | orchestrator | Sunday 08 March 2026 00:47:01 +0000 (0:00:03.369) 0:02:15.336 **********
2026-03-08 00:49:13.533089 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:13.533095 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:13.533101 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:13.533107 | orchestrator |
2026-03-08 00:49:13.533113 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-08 00:49:13.533119 | orchestrator | Sunday 08 March 2026 00:47:01 +0000 (0:00:00.536) 0:02:15.873 **********
2026-03-08 00:49:13.533125 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:13.533131 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:13.533137 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:13.533141 | orchestrator |
2026-03-08 00:49:13.533145 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-08 00:49:13.533149 | orchestrator | Sunday 08 March 2026 00:47:02 +0000 (0:00:00.678) 0:02:16.551 **********
2026-03-08 00:49:13.533153 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:49:13.533156 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:49:13.533160 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:49:13.533163 | orchestrator |
2026-03-08 00:49:13.533167 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-08 00:49:13.533171 | orchestrator | Sunday 08 March 2026 00:47:02 +0000 (0:00:00.347) 0:02:16.899 **********
2026-03-08 00:49:13.533175 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:49:13.533179 | orchestrator |
2026-03-08 00:49:13.533182 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-08 00:49:13.533191 | orchestrator | Sunday 08 March 2026 00:47:03 +0000 (0:00:00.717) 0:02:17.616 **********
2026-03-08 00:49:13.533195 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:13.533198 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:13.533202 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:13.533206 | orchestrator |
2026-03-08 00:49:13.533210 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-08 00:49:13.533213 | orchestrator | Sunday 08 March 2026 00:47:03 +0000 (0:00:00.314) 0:02:17.931 **********
2026-03-08 00:49:13.533217 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:13.533221 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:13.533225 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:13.533228 | orchestrator |
2026-03-08 00:49:13.533232 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-08 00:49:13.533236 | orchestrator | Sunday 08 March 2026 00:47:04 +0000 (0:00:00.369) 0:02:18.301 **********
2026-03-08 00:49:13.533239 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:49:13.533243 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:49:13.533247 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:49:13.533250 | orchestrator |
2026-03-08 00:49:13.533254 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-08 00:49:13.533258 | orchestrator | Sunday 08 March 2026 00:47:04 +0000 (0:00:00.359) 0:02:18.660 **********
2026-03-08 00:49:13.533262 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:13.533265 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:13.533269 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:13.533273 | orchestrator |
2026-03-08 00:49:13.533280 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-08 00:49:13.533284 | orchestrator | Sunday 08 March 2026 00:47:05 +0000 (0:00:00.987) 0:02:19.647 **********
2026-03-08 00:49:13.533288 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:13.533292 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:13.533296 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:13.533299 | orchestrator |
2026-03-08 00:49:13.533303 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-08 00:49:13.533307 | orchestrator | Sunday 08 March 2026 00:47:06 +0000 (0:00:01.231) 0:02:20.879 **********
2026-03-08 00:49:13.533310 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:13.533314 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:13.533318 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:13.533321 | orchestrator |
2026-03-08 00:49:13.533325 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-08 00:49:13.533329 | orchestrator | Sunday 08 March 2026 00:47:08 +0000 (0:00:01.390) 0:02:22.270 **********
2026-03-08 00:49:13.533333 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:49:13.533336 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:49:13.533340 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:49:13.533344 | orchestrator |
2026-03-08 00:49:13.533347 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-08 00:49:13.533351 | orchestrator |
2026-03-08 00:49:13.533355 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-08 00:49:13.533358 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:10.474) 0:02:32.744 **********
2026-03-08 00:49:13.533362 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533366 | orchestrator |
2026-03-08 00:49:13.533370 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-08 00:49:13.533373 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:00.883) 0:02:33.628 **********
2026-03-08 00:49:13.533377 | orchestrator | changed: [testbed-manager]
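The "Prepare kubeconfig file" play that begins above copies the k3s kubeconfig off the first master and repoints it at a reachable API address (k3s writes `https://127.0.0.1:6443` into its kubeconfig by default; this run targets the address `https://192.168.16.8:6443` seen earlier in the log). A minimal sketch of such a server-address rewrite; the function is illustrative, not the play's actual implementation:

```python
def point_kubeconfig_at(kubeconfig: str, server: str) -> str:
    """Replace the local-only API server address that k3s writes
    into its kubeconfig with an address reachable off-node
    (for example the kube-vip VIP used in this testbed)."""
    return kubeconfig.replace("https://127.0.0.1:6443", server)

# Minimal kubeconfig fragment as k3s would write it locally.
original = (
    "clusters:\n"
    "- cluster:\n"
    "    server: https://127.0.0.1:6443\n"
)
patched = point_kubeconfig_at(original, "https://192.168.16.8:6443")
print(patched)
```

Doing the rewrite once on fetch means every later consumer (the operator user and the manager service) gets a kubeconfig that works from outside the master node.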
2026-03-08 00:49:13.533381 | orchestrator |
2026-03-08 00:49:13.533384 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-08 00:49:13.533388 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:00.450) 0:02:34.079 **********
2026-03-08 00:49:13.533396 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-08 00:49:13.533400 | orchestrator |
2026-03-08 00:49:13.533404 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-08 00:49:13.533408 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:00.629) 0:02:34.708 **********
2026-03-08 00:49:13.533411 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533415 | orchestrator |
2026-03-08 00:49:13.533419 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-08 00:49:13.533427 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.741) 0:02:35.450 **********
2026-03-08 00:49:13.533430 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533434 | orchestrator |
2026-03-08 00:49:13.533438 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-08 00:49:13.533442 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.601) 0:02:36.051 **********
2026-03-08 00:49:13.533446 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:13.533449 | orchestrator |
2026-03-08 00:49:13.533456 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-08 00:49:13.533462 | orchestrator | Sunday 08 March 2026 00:47:23 +0000 (0:00:01.601) 0:02:37.653 **********
2026-03-08 00:49:13.533467 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:13.533472 | orchestrator |
2026-03-08 00:49:13.533477 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-08 00:49:13.533482 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.794) 0:02:38.447 **********
2026-03-08 00:49:13.533487 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533492 | orchestrator |
2026-03-08 00:49:13.533503 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-08 00:49:13.533510 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:00.572) 0:02:39.019 **********
2026-03-08 00:49:13.533517 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533522 | orchestrator |
2026-03-08 00:49:13.533528 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-08 00:49:13.533533 | orchestrator |
2026-03-08 00:49:13.533538 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-08 00:49:13.533543 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:00.452) 0:02:39.471 **********
2026-03-08 00:49:13.533548 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533553 | orchestrator |
2026-03-08 00:49:13.533558 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-08 00:49:13.533563 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:00.152) 0:02:39.624 **********
2026-03-08 00:49:13.533569 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-08 00:49:13.533574 | orchestrator |
2026-03-08 00:49:13.533580 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-08 00:49:13.533586 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:00.243) 0:02:39.867 **********
2026-03-08 00:49:13.533591 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533596 | orchestrator |
2026-03-08 00:49:13.533602 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-08 00:49:13.533607 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:01.243) 0:02:41.111 **********
2026-03-08 00:49:13.533613 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533619 | orchestrator |
2026-03-08 00:49:13.533625 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-08 00:49:13.533631 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:01.535) 0:02:42.647 **********
2026-03-08 00:49:13.533637 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533643 | orchestrator |
2026-03-08 00:49:13.533649 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-08 00:49:13.533655 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.756) 0:02:43.403 **********
2026-03-08 00:49:13.533661 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533677 | orchestrator |
2026-03-08 00:49:13.533687 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-08 00:49:13.533696 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:00.424) 0:02:43.827 **********
2026-03-08 00:49:13.533704 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533710 | orchestrator |
2026-03-08 00:49:13.533716 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-08 00:49:13.533721 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:06.952) 0:02:50.780 **********
2026-03-08 00:49:13.533727 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:13.533732 | orchestrator |
2026-03-08 00:49:13.533738 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-08 00:49:13.533743 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:13.026) 0:03:03.806 **********
2026-03-08 00:49:13.533749 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:13.533755 | orchestrator |
2026-03-08 00:49:13.533761 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-08 00:49:13.533768 | orchestrator |
2026-03-08 00:49:13.533774 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-08 00:49:13.533780 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:00.482) 0:03:04.288 **********
2026-03-08 00:49:13.533786 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:13.533792 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:13.533798 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:13.533804 | orchestrator |
2026-03-08 00:49:13.533810 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-08 00:49:13.533859 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:00.410) 0:03:04.699 **********
2026-03-08 00:49:13.533863 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:13.533866 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:13.533870 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:13.533874 | orchestrator |
2026-03-08 00:49:13.533878 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-08 00:49:13.533882 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:00.350) 0:03:05.050 **********
2026-03-08 00:49:13.533886 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:49:13.533889 | orchestrator |
2026-03-08 00:49:13.533893 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-08 00:49:13.533897 | orchestrator | Sunday 08 March 2026 00:47:51 +0000 (0:00:00.731) 0:03:05.781 **********
2026-03-08 00:49:13.533901 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-08 00:49:13.533905 | orchestrator | 2026-03-08 00:49:13.533908 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-08 00:49:13.533912 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:00.767) 0:03:06.548 ********** 2026-03-08 00:49:13.533920 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.533924 | orchestrator | 2026-03-08 00:49:13.533928 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-08 00:49:13.533932 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:01.112) 0:03:07.661 ********** 2026-03-08 00:49:13.533936 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.533939 | orchestrator | 2026-03-08 00:49:13.533943 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-08 00:49:13.533947 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:00.119) 0:03:07.781 ********** 2026-03-08 00:49:13.533950 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.533954 | orchestrator | 2026-03-08 00:49:13.533958 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-08 00:49:13.533962 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.944) 0:03:08.725 ********** 2026-03-08 00:49:13.533965 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.533969 | orchestrator | 2026-03-08 00:49:13.533973 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-08 00:49:13.533981 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.111) 0:03:08.836 ********** 2026-03-08 00:49:13.533985 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.533989 | orchestrator | 2026-03-08 00:49:13.533993 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-08 00:49:13.533996 | 
orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.120) 0:03:08.957 ********** 2026-03-08 00:49:13.534000 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.534004 | orchestrator | 2026-03-08 00:49:13.534008 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-08 00:49:13.534011 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.115) 0:03:09.072 ********** 2026-03-08 00:49:13.534084 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.534091 | orchestrator | 2026-03-08 00:49:13.534100 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-08 00:49:13.534107 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.098) 0:03:09.171 ********** 2026-03-08 00:49:13.534113 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.534119 | orchestrator | 2026-03-08 00:49:13.534125 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-08 00:49:13.534131 | orchestrator | Sunday 08 March 2026 00:47:59 +0000 (0:00:04.988) 0:03:14.160 ********** 2026-03-08 00:49:13.534137 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-08 00:49:13.534143 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-08 00:49:13.534150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-08 00:49:13.534156 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-08 00:49:13.534162 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-08 00:49:13.534168 | orchestrator | 2026-03-08 00:49:13.534174 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-08 00:49:13.534180 | orchestrator | Sunday 08 March 2026 00:48:43 +0000 (0:00:43.023) 0:03:57.184 ********** 2026-03-08 00:49:13.534400 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.534475 | orchestrator | 2026-03-08 00:49:13.534487 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-08 00:49:13.534495 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:01.244) 0:03:58.428 ********** 2026-03-08 00:49:13.534503 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.534510 | orchestrator | 2026-03-08 00:49:13.534516 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-08 00:49:13.534522 | orchestrator | Sunday 08 March 2026 00:48:45 +0000 (0:00:01.647) 0:04:00.076 ********** 2026-03-08 00:49:13.534529 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-08 00:49:13.534535 | orchestrator | 2026-03-08 00:49:13.534541 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-08 00:49:13.534547 | orchestrator | Sunday 08 March 2026 00:48:47 +0000 (0:00:01.190) 0:04:01.267 ********** 2026-03-08 00:49:13.534570 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.534577 | orchestrator | 2026-03-08 00:49:13.534583 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-08 00:49:13.534589 | orchestrator 
| Sunday 08 March 2026 00:48:47 +0000 (0:00:00.157) 0:04:01.424 ********** 2026-03-08 00:49:13.534596 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-08 00:49:13.534601 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-08 00:49:13.534605 | orchestrator | 2026-03-08 00:49:13.534609 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-08 00:49:13.534612 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:02.248) 0:04:03.672 ********** 2026-03-08 00:49:13.534617 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.534637 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.534642 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.534656 | orchestrator | 2026-03-08 00:49:13.534660 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-08 00:49:13.534670 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:00.421) 0:04:04.093 ********** 2026-03-08 00:49:13.534674 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.534679 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.534683 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.534687 | orchestrator | 2026-03-08 00:49:13.534691 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-08 00:49:13.534695 | orchestrator | 2026-03-08 00:49:13.534698 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-08 00:49:13.534702 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:01.291) 0:04:05.385 ********** 2026-03-08 00:49:13.534706 | orchestrator | ok: [testbed-manager] 2026-03-08 00:49:13.534710 | orchestrator | 2026-03-08 00:49:13.534721 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-08 00:49:13.534725 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:00.134) 0:04:05.520 ********** 2026-03-08 00:49:13.534729 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-08 00:49:13.534733 | orchestrator | 2026-03-08 00:49:13.534737 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-08 00:49:13.534741 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:00.230) 0:04:05.750 ********** 2026-03-08 00:49:13.534745 | orchestrator | changed: [testbed-manager] 2026-03-08 00:49:13.534749 | orchestrator | 2026-03-08 00:49:13.534753 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-08 00:49:13.534757 | orchestrator | 2026-03-08 00:49:13.534761 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-08 00:49:13.534765 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:06.054) 0:04:11.804 ********** 2026-03-08 00:49:13.534769 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:49:13.534773 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:49:13.534777 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:49:13.534781 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:13.534785 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:49:13.534789 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:49:13.534793 | orchestrator | 2026-03-08 00:49:13.534797 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-08 00:49:13.534802 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:01.691) 0:04:13.495 ********** 2026-03-08 00:49:13.534806 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-08 00:49:13.534810 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-03-08 00:49:13.534876 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-08 00:49:13.534884 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-08 00:49:13.534890 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-08 00:49:13.534896 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-08 00:49:13.534903 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-08 00:49:13.534909 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-08 00:49:13.534915 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-08 00:49:13.534922 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-08 00:49:13.534927 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-08 00:49:13.534933 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-08 00:49:13.534959 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-08 00:49:13.534971 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-08 00:49:13.534978 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-08 00:49:13.534984 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-08 00:49:13.534991 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-08 00:49:13.534999 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-08 00:49:13.535006 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-08 00:49:13.535013 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-08 00:49:13.535021 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-08 00:49:13.535027 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-08 00:49:13.535034 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-08 00:49:13.535041 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-08 00:49:13.535048 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-08 00:49:13.535055 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-08 00:49:13.535061 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-08 00:49:13.535068 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-08 00:49:13.535075 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-08 00:49:13.535082 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-08 00:49:13.535089 | orchestrator | 2026-03-08 00:49:13.535096 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-08 00:49:13.535103 | orchestrator | Sunday 08 March 2026 00:49:11 +0000 (0:00:12.237) 0:04:25.732 ********** 2026-03-08 00:49:13.535109 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.535116 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.535123 | orchestrator | 
skipping: [testbed-node-4] 2026-03-08 00:49:13.535130 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.535145 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.535153 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.535160 | orchestrator | 2026-03-08 00:49:13.535167 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-08 00:49:13.535175 | orchestrator | Sunday 08 March 2026 00:49:12 +0000 (0:00:00.680) 0:04:26.413 ********** 2026-03-08 00:49:13.535182 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:49:13.535189 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:49:13.535197 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:49:13.535205 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:13.535211 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:49:13.535218 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:49:13.535226 | orchestrator | 2026-03-08 00:49:13.535232 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:49:13.535240 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:49:13.535250 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-08 00:49:13.535259 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-08 00:49:13.535275 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-08 00:49:13.535283 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-08 00:49:13.535291 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-08 00:49:13.535298 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-08 00:49:13.535305 | orchestrator | 2026-03-08 00:49:13.535312 | orchestrator | 2026-03-08 00:49:13.535319 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:49:13.535326 | orchestrator | Sunday 08 March 2026 00:49:12 +0000 (0:00:00.472) 0:04:26.886 ********** 2026-03-08 00:49:13.535333 | orchestrator | =============================================================================== 2026-03-08 00:49:13.535340 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.62s 2026-03-08 00:49:13.535347 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.02s 2026-03-08 00:49:13.535353 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.60s 2026-03-08 00:49:13.535369 | orchestrator | kubectl : Install required packages ------------------------------------ 13.03s 2026-03-08 00:49:13.535377 | orchestrator | Manage labels ---------------------------------------------------------- 12.24s 2026-03-08 00:49:13.535384 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.47s 2026-03-08 00:49:13.535391 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.95s 2026-03-08 00:49:13.535399 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.05s 2026-03-08 00:49:13.535406 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.51s 2026-03-08 00:49:13.535413 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.99s 2026-03-08 00:49:13.535420 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.42s 2026-03-08 00:49:13.535424 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.37s 2026-03-08 00:49:13.535428 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.97s 2026-03-08 00:49:13.535432 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.48s 2026-03-08 00:49:13.535436 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.25s 2026-03-08 00:49:13.535440 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.24s 2026-03-08 00:49:13.535444 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.17s 2026-03-08 00:49:13.535448 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.02s 2026-03-08 00:49:13.535452 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.99s 2026-03-08 00:49:13.535456 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.95s 2026-03-08 00:49:13.536303 | orchestrator | 2026-03-08 00:49:13 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:13.538195 | orchestrator | 2026-03-08 00:49:13 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:13.539908 | orchestrator | 2026-03-08 00:49:13 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:13.541076 | orchestrator | 2026-03-08 00:49:13 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:13.541200 | orchestrator | 2026-03-08 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:16.620392 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 7f505f89-1c84-42fc-acfa-dd5edd02d933 is in state STARTED 2026-03-08 00:49:16.624133 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 6c639979-f2bc-4a90-917b-0ea56eb44d6e is in state 
STARTED 2026-03-08 00:49:16.624660 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:16.626160 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:16.629582 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:16.630984 | orchestrator | 2026-03-08 00:49:16 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:16.631049 | orchestrator | 2026-03-08 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:19.677857 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 7f505f89-1c84-42fc-acfa-dd5edd02d933 is in state STARTED 2026-03-08 00:49:19.680998 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 6c639979-f2bc-4a90-917b-0ea56eb44d6e is in state STARTED 2026-03-08 00:49:19.684872 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:19.687836 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:19.690404 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:19.692979 | orchestrator | 2026-03-08 00:49:19 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:19.693039 | orchestrator | 2026-03-08 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:22.737339 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 7f505f89-1c84-42fc-acfa-dd5edd02d933 is in state STARTED 2026-03-08 00:49:22.737425 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 6c639979-f2bc-4a90-917b-0ea56eb44d6e is in state SUCCESS 2026-03-08 00:49:22.737947 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 
2026-03-08 00:49:22.739182 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:22.739846 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:22.741284 | orchestrator | 2026-03-08 00:49:22 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:22.741334 | orchestrator | 2026-03-08 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:25.787535 | orchestrator | 2026-03-08 00:49:25 | INFO  | Task 7f505f89-1c84-42fc-acfa-dd5edd02d933 is in state STARTED 2026-03-08 00:49:25.787631 | orchestrator | 2026-03-08 00:49:25 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:25.788223 | orchestrator | 2026-03-08 00:49:25 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:25.788981 | orchestrator | 2026-03-08 00:49:25 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:25.789606 | orchestrator | 2026-03-08 00:49:25 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:25.789624 | orchestrator | 2026-03-08 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:28.821020 | orchestrator | 2026-03-08 00:49:28 | INFO  | Task 7f505f89-1c84-42fc-acfa-dd5edd02d933 is in state SUCCESS 2026-03-08 00:49:28.822407 | orchestrator | 2026-03-08 00:49:28 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:28.823080 | orchestrator | 2026-03-08 00:49:28 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:28.823958 | orchestrator | 2026-03-08 00:49:28 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:28.824863 | orchestrator | 2026-03-08 00:49:28 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 
2026-03-08 00:49:28.824905 | orchestrator | 2026-03-08 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:31.856437 | orchestrator | 2026-03-08 00:49:31 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:31.856539 | orchestrator | 2026-03-08 00:49:31 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:31.857083 | orchestrator | 2026-03-08 00:49:31 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:31.858457 | orchestrator | 2026-03-08 00:49:31 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:31.858491 | orchestrator | 2026-03-08 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:34.900611 | orchestrator | 2026-03-08 00:49:34 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:34.902257 | orchestrator | 2026-03-08 00:49:34 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:34.904222 | orchestrator | 2026-03-08 00:49:34 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:34.905310 | orchestrator | 2026-03-08 00:49:34 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:34.905589 | orchestrator | 2026-03-08 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:37.939891 | orchestrator | 2026-03-08 00:49:37 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:37.940877 | orchestrator | 2026-03-08 00:49:37 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:37.942295 | orchestrator | 2026-03-08 00:49:37 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:37.944354 | orchestrator | 2026-03-08 00:49:37 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:37.944451 | 
orchestrator | 2026-03-08 00:49:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:40.990284 | orchestrator | 2026-03-08 00:49:40 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:40.992164 | orchestrator | 2026-03-08 00:49:40 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:40.994117 | orchestrator | 2026-03-08 00:49:40 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:40.995254 | orchestrator | 2026-03-08 00:49:40 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:40.995285 | orchestrator | 2026-03-08 00:49:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:44.099661 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:44.099981 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:44.100794 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:44.103813 | orchestrator | 2026-03-08 00:49:44 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:44.103874 | orchestrator | 2026-03-08 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:47.142269 | orchestrator | 2026-03-08 00:49:47 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:47.142671 | orchestrator | 2026-03-08 00:49:47 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:47.144102 | orchestrator | 2026-03-08 00:49:47 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:47.145962 | orchestrator | 2026-03-08 00:49:47 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state STARTED 2026-03-08 00:49:47.146073 | orchestrator | 2026-03-08 
00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:50.194139 | orchestrator | 2026-03-08 00:49:50 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:50.196498 | orchestrator | 2026-03-08 00:49:50 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:50.197406 | orchestrator | 2026-03-08 00:49:50 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:50.198803 | orchestrator | 2026-03-08 00:49:50 | INFO  | Task 0090c2b7-ff53-4ec8-8b6d-3acf888e2829 is in state SUCCESS 2026-03-08 00:49:50.198938 | orchestrator | 2026-03-08 00:49:50.198953 | orchestrator | 2026-03-08 00:49:50.198961 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-08 00:49:50.198968 | orchestrator | 2026-03-08 00:49:50.198985 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-08 00:49:50.198998 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-03-08 00:49:50.199012 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-08 00:49:50.199024 | orchestrator | 2026-03-08 00:49:50.199030 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-08 00:49:50.199037 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:00.929) 0:00:01.125 ********** 2026-03-08 00:49:50.199043 | orchestrator | changed: [testbed-manager] 2026-03-08 00:49:50.199049 | orchestrator | 2026-03-08 00:49:50.199056 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-08 00:49:50.199062 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:01.230) 0:00:02.356 ********** 2026-03-08 00:49:50.199068 | orchestrator | changed: [testbed-manager] 2026-03-08 00:49:50.199074 | orchestrator | 2026-03-08 00:49:50.199081 | orchestrator 
| PLAY RECAP *********************************************************************
2026-03-08 00:49:50.199110 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:49:50.199118 | orchestrator |
2026-03-08 00:49:50.199124 | orchestrator |
2026-03-08 00:49:50.199131 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:49:50.199137 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:00.546) 0:00:02.902 **********
2026-03-08 00:49:50.199143 | orchestrator | ===============================================================================
2026-03-08 00:49:50.199149 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s
2026-03-08 00:49:50.199155 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s
2026-03-08 00:49:50.199162 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.55s
2026-03-08 00:49:50.199168 | orchestrator |
2026-03-08 00:49:50.199174 | orchestrator |
2026-03-08 00:49:50.199180 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-08 00:49:50.199187 | orchestrator |
2026-03-08 00:49:50.199207 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-08 00:49:50.199214 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:00.180) 0:00:00.180 **********
2026-03-08 00:49:50.199221 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:50.199227 | orchestrator |
2026-03-08 00:49:50.199233 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-08 00:49:50.199248 | orchestrator | Sunday 08 March 2026 00:49:18 +0000 (0:00:00.614) 0:00:00.794 **********
2026-03-08 00:49:50.199254 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:50.199260 | orchestrator |
2026-03-08 00:49:50.199266 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-08 00:49:50.199273 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:00.634) 0:00:01.429 **********
2026-03-08 00:49:50.199279 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-08 00:49:50.199285 | orchestrator |
2026-03-08 00:49:50.199291 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-08 00:49:50.199297 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:00.765) 0:00:02.194 **********
2026-03-08 00:49:50.199303 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:50.199309 | orchestrator |
2026-03-08 00:49:50.199335 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-08 00:49:50.199355 | orchestrator | Sunday 08 March 2026 00:49:21 +0000 (0:00:01.621) 0:00:03.816 **********
2026-03-08 00:49:50.199365 | orchestrator | changed: [testbed-manager]
2026-03-08 00:49:50.199375 | orchestrator |
2026-03-08 00:49:50.199386 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-08 00:49:50.199396 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:00.634) 0:00:04.450 **********
2026-03-08 00:49:50.199404 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:50.199411 | orchestrator |
2026-03-08 00:49:50.199417 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-08 00:49:50.199423 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:01.771) 0:00:06.222 **********
2026-03-08 00:49:50.199429 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-08 00:49:50.199435 | orchestrator |
2026-03-08 00:49:50.199441 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-08 00:49:50.199447 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.909) 0:00:07.131 **********
2026-03-08 00:49:50.199454 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:50.199460 | orchestrator |
2026-03-08 00:49:50.199468 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-08 00:49:50.199475 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.451) 0:00:07.582 **********
2026-03-08 00:49:50.199482 | orchestrator | ok: [testbed-manager]
2026-03-08 00:49:50.199489 | orchestrator |
2026-03-08 00:49:50.199496 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:49:50.199504 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:49:50.199593 | orchestrator |
2026-03-08 00:49:50.199604 | orchestrator |
2026-03-08 00:49:50.199612 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:49:50.199621 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.351) 0:00:07.934 **********
2026-03-08 00:49:50.199629 | orchestrator | ===============================================================================
2026-03-08 00:49:50.199637 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.77s
2026-03-08 00:49:50.199645 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.62s
2026-03-08 00:49:50.199654 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.91s
2026-03-08 00:49:50.199671 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2026-03-08 00:49:50.199681 | orchestrator | Create .kube directory -------------------------------------------------- 0.63s
2026-03-08 00:49:50.199698 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s
2026-03-08 00:49:50.199711 | orchestrator | Get home directory of operator user ------------------------------------- 0.61s
2026-03-08 00:49:50.199738 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s
2026-03-08 00:49:50.199752 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2026-03-08 00:49:50.199765 | orchestrator |
2026-03-08 00:49:50.199980 | orchestrator |
2026-03-08 00:49:50.200000 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-08 00:49:50.200013 | orchestrator |
2026-03-08 00:49:50.200025 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-08 00:49:50.200038 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:00.073) 0:00:00.073 **********
2026-03-08 00:49:50.200047 | orchestrator | ok: [localhost] => {
2026-03-08 00:49:50.200054 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-08 00:49:50.200062 | orchestrator | }
2026-03-08 00:49:50.200069 | orchestrator |
2026-03-08 00:49:50.200076 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-08 00:49:50.200083 | orchestrator | Sunday 08 March 2026 00:47:27 +0000 (0:00:00.039) 0:00:00.112 **********
2026-03-08 00:49:50.200091 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-08 00:49:50.200099 | orchestrator | ...ignoring
2026-03-08 00:49:50.200106 | orchestrator |
2026-03-08 00:49:50.200113 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-08 00:49:50.200120 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:02.816) 0:00:02.929 **********
2026-03-08 00:49:50.200127 | orchestrator | skipping: [localhost]
2026-03-08 00:49:50.200135 | orchestrator |
2026-03-08 00:49:50.200142 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-08 00:49:50.200149 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:00.137) 0:00:03.067 **********
2026-03-08 00:49:50.200156 | orchestrator | ok: [localhost]
2026-03-08 00:49:50.200163 | orchestrator |
2026-03-08 00:49:50.200170 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:49:50.200177 | orchestrator |
2026-03-08 00:49:50.200184 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:49:50.200197 | orchestrator | Sunday 08 March 2026 00:47:30 +0000 (0:00:00.411) 0:00:03.478 **********
2026-03-08 00:49:50.200204 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:50.200211 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:50.200218 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:50.200225 | orchestrator |
2026-03-08 00:49:50.200232 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:49:50.200239 | orchestrator | Sunday 08 March 2026 00:47:31 +0000 (0:00:00.675) 0:00:04.154 **********
2026-03-08 00:49:50.200246 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-08 00:49:50.200254 | orchestrator | ok: [testbed-node-0] =>
(item=enable_rabbitmq_True) 2026-03-08 00:49:50.200261 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-08 00:49:50.200268 | orchestrator | 2026-03-08 00:49:50.200275 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-08 00:49:50.200282 | orchestrator | 2026-03-08 00:49:50.200289 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-08 00:49:50.200296 | orchestrator | Sunday 08 March 2026 00:47:32 +0000 (0:00:01.189) 0:00:05.343 ********** 2026-03-08 00:49:50.200303 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:49:50.200311 | orchestrator | 2026-03-08 00:49:50.200318 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-08 00:49:50.200325 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:01.101) 0:00:06.445 ********** 2026-03-08 00:49:50.200339 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:50.200346 | orchestrator | 2026-03-08 00:49:50.200353 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-08 00:49:50.200360 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:01.047) 0:00:07.492 ********** 2026-03-08 00:49:50.200367 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:50.200375 | orchestrator | 2026-03-08 00:49:50.200382 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-08 00:49:50.200389 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.428) 0:00:07.921 ********** 2026-03-08 00:49:50.200396 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:50.200403 | orchestrator | 2026-03-08 00:49:50.200410 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-08 00:49:50.200417 | 
orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.364) 0:00:08.286 ********** 2026-03-08 00:49:50.200424 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:50.200431 | orchestrator | 2026-03-08 00:49:50.200438 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-08 00:49:50.200445 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:00.354) 0:00:08.640 ********** 2026-03-08 00:49:50.200452 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:50.200459 | orchestrator | 2026-03-08 00:49:50.200466 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-08 00:49:50.200473 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:00.678) 0:00:09.319 ********** 2026-03-08 00:49:50.200480 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:49:50.200487 | orchestrator | 2026-03-08 00:49:50.200494 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-08 00:49:50.200501 | orchestrator | Sunday 08 March 2026 00:47:37 +0000 (0:00:00.737) 0:00:10.056 ********** 2026-03-08 00:49:50.200508 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:49:50.200515 | orchestrator | 2026-03-08 00:49:50.200522 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-08 00:49:50.200529 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.899) 0:00:10.955 ********** 2026-03-08 00:49:50.200536 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:49:50.200543 | orchestrator | 2026-03-08 00:49:50.200550 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-08 00:49:50.200558 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.380) 0:00:11.335 ********** 2026-03-08 00:49:50.200565 | orchestrator | 
skipping: [testbed-node-0] 2026-03-08 00:49:50.200572 | orchestrator | 2026-03-08 00:49:50.200589 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-08 00:49:50.200598 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.371) 0:00:11.707 ********** 2026-03-08 00:49:50.200611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200652 | orchestrator | 2026-03-08 00:49:50.200660 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-08 00:49:50.200669 | orchestrator | Sunday 08 March 2026 00:47:40 +0000 (0:00:01.396) 0:00:13.104 ********** 2026-03-08 00:49:50.200683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.200782 | orchestrator | 2026-03-08 00:49:50.200794 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-08 00:49:50.200803 | orchestrator | Sunday 08 March 2026 00:47:43 +0000 (0:00:02.969) 0:00:16.073 ********** 2026-03-08 00:49:50.200812 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:49:50.200821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:49:50.200829 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-08 00:49:50.200837 | orchestrator | 2026-03-08 00:49:50.200844 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-03-08 00:49:50.200852 | orchestrator | Sunday 08 March 2026 00:47:44 +0000 (0:00:01.463) 0:00:17.537 **********
2026-03-08 00:49:50.200859 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-08 00:49:50.200866 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-08 00:49:50.200873 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-08 00:49:50.200880 | orchestrator |
2026-03-08 00:49:50.200887 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-08 00:49:50.200894 | orchestrator | Sunday 08 March 2026 00:47:47 +0000 (0:00:02.599) 0:00:20.137 **********
2026-03-08 00:49:50.200902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-08 00:49:50.200909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-08 00:49:50.200916 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-08 00:49:50.200923 | orchestrator |
2026-03-08 00:49:50.200930 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-08 00:49:50.200937 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:01.731) 0:00:21.868 **********
2026-03-08 00:49:50.200949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-08 00:49:50.200957 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-08 00:49:50.200964 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-08 00:49:50.200972 | orchestrator |
2026-03-08 00:49:50.200985 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-08 00:49:50.200992 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:03.478) 0:00:25.347 **********
2026-03-08 00:49:50.200999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-08 00:49:50.201006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-08 00:49:50.201013 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-08 00:49:50.201021 | orchestrator |
2026-03-08 00:49:50.201028 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-08 00:49:50.201035 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:02.416) 0:00:27.763 **********
2026-03-08 00:49:50.201042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-08 00:49:50.201049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-08 00:49:50.201057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-08 00:49:50.201064 | orchestrator |
2026-03-08 00:49:50.201071 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-08 00:49:50.201082 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:01.797) 0:00:29.561 **********
2026-03-08 00:49:50.201089 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:50.201096 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:50.201104 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:50.201111 | orchestrator |
2026-03-08 00:49:50.201118 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-08 00:49:50.201125 | orchestrator | Sunday 08 March 2026 00:47:57
+0000 (0:00:00.404) 0:00:29.965 ********** 2026-03-08 00:49:50.201133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.201142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.201160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:49:50.201168 | orchestrator | 2026-03-08 00:49:50.201176 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-08 00:49:50.201183 | orchestrator | Sunday 08 March 2026 00:47:58 +0000 (0:00:01.591) 0:00:31.556 ********** 2026-03-08 00:49:50.201190 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:49:50.201197 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:49:50.201204 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:49:50.201211 | orchestrator | 2026-03-08 00:49:50.201218 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-08 00:49:50.201225 | 
orchestrator | Sunday 08 March 2026 00:47:59 +0000 (0:00:01.041) 0:00:32.597 **********
2026-03-08 00:49:50.201232 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:50.201240 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:50.201247 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:50.201254 | orchestrator |
2026-03-08 00:49:50.201261 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-08 00:49:50.201271 | orchestrator | Sunday 08 March 2026 00:48:07 +0000 (0:00:07.833) 0:00:40.431 **********
2026-03-08 00:49:50.201279 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:50.201286 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:50.201293 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:50.201300 | orchestrator |
2026-03-08 00:49:50.201307 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:49:50.201314 | orchestrator |
2026-03-08 00:49:50.201321 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:49:50.201328 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:00.622) 0:00:41.054 **********
2026-03-08 00:49:50.201336 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:50.201343 | orchestrator |
2026-03-08 00:49:50.201350 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:49:50.201357 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:00.669) 0:00:41.723 **********
2026-03-08 00:49:50.201365 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:49:50.201372 | orchestrator |
2026-03-08 00:49:50.201379 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:49:50.201386 | orchestrator | Sunday 08 March 2026 00:48:09 +0000 (0:00:00.363) 0:00:42.087 **********
2026-03-08 00:49:50.201393 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:50.201401 | orchestrator |
2026-03-08 00:49:50.201408 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:49:50.201415 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:02.161) 0:00:44.249 **********
2026-03-08 00:49:50.201422 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:49:50.201429 | orchestrator |
2026-03-08 00:49:50.201436 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:49:50.201444 | orchestrator |
2026-03-08 00:49:50.201451 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:49:50.201462 | orchestrator | Sunday 08 March 2026 00:49:04 +0000 (0:00:53.377) 0:01:37.627 **********
2026-03-08 00:49:50.201469 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:50.201476 | orchestrator |
2026-03-08 00:49:50.201483 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:49:50.201490 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.727) 0:01:38.354 **********
2026-03-08 00:49:50.201497 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:49:50.201510 | orchestrator |
2026-03-08 00:49:50.201523 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:49:50.201534 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:00.289) 0:01:38.643 **********
2026-03-08 00:49:50.201548 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:50.201561 | orchestrator |
2026-03-08 00:49:50.201574 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:49:50.201584 | orchestrator | Sunday 08 March 2026 00:49:08 +0000 (0:00:02.523) 0:01:41.167 **********
2026-03-08 00:49:50.201592 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:49:50.201599 | orchestrator |
2026-03-08 00:49:50.201606 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-08 00:49:50.201613 | orchestrator |
2026-03-08 00:49:50.201620 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-08 00:49:50.201628 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:17.880) 0:01:59.047 **********
2026-03-08 00:49:50.201635 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:50.201642 | orchestrator |
2026-03-08 00:49:50.201649 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-08 00:49:50.201656 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.769) 0:01:59.817 **********
2026-03-08 00:49:50.201663 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:49:50.201671 | orchestrator |
2026-03-08 00:49:50.201678 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-08 00:49:50.201685 | orchestrator | Sunday 08 March 2026 00:49:27 +0000 (0:00:00.416) 0:02:00.234 **********
2026-03-08 00:49:50.201692 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:50.201699 | orchestrator |
2026-03-08 00:49:50.201707 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-08 00:49:50.201718 | orchestrator | Sunday 08 March 2026 00:49:29 +0000 (0:00:02.124) 0:02:02.358 **********
2026-03-08 00:49:50.201745 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:49:50.201754 | orchestrator |
2026-03-08 00:49:50.201761 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-08 00:49:50.201768 | orchestrator |
2026-03-08 00:49:50.201775 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-08 00:49:50.201782 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:17.244) 0:02:19.603 **********
2026-03-08 00:49:50.201789 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:49:50.201796 | orchestrator |
2026-03-08 00:49:50.201803 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-08 00:49:50.201810 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.479) 0:02:20.082 **********
2026-03-08 00:49:50.201818 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:49:50.201825 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:49:50.201832 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:49:50.201839 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-08 00:49:50.201846 | orchestrator | enable_outward_rabbitmq_True
2026-03-08 00:49:50.201853 | orchestrator |
2026-03-08 00:49:50.201861 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-08 00:49:50.201868 | orchestrator | skipping: no hosts matched
2026-03-08 00:49:50.201875 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-08 00:49:50.201882 | orchestrator | outward_rabbitmq_restart
2026-03-08 00:49:50.201889 | orchestrator |
2026-03-08 00:49:50.201896 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-08 00:49:50.201912 | orchestrator | skipping: no hosts matched
2026-03-08 00:49:50.201919 | orchestrator |
2026-03-08 00:49:50.201927 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-08 00:49:50.201934 | orchestrator | skipping: no hosts matched
2026-03-08 00:49:50.201941 | orchestrator |
2026-03-08 00:49:50.201948 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:49:50.201959 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-08 00:49:50.201967 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-08 00:49:50.201974 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:49:50.201981 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 00:49:50.201989 | orchestrator |
2026-03-08 00:49:50.201996 | orchestrator |
2026-03-08 00:49:50.202003 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:49:50.202010 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:02.435) 0:02:22.518 **********
2026-03-08 00:49:50.202053 | orchestrator | ===============================================================================
2026-03-08 00:49:50.202063 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.50s
2026-03-08 00:49:50.202071 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.83s
2026-03-08 00:49:50.202080 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.81s
2026-03-08 00:49:50.202089 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.48s
2026-03-08 00:49:50.202097 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.97s
2026-03-08 00:49:50.202106 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.82s
2026-03-08 00:49:50.202114 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.60s
2026-03-08 00:49:50.202123 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.44s
2026-03-08 00:49:50.202132 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.42s
2026-03-08 00:49:50.202141 |
orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.17s 2026-03-08 00:49:50.202149 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.80s 2026-03-08 00:49:50.202158 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.73s 2026-03-08 00:49:50.202166 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.59s 2026-03-08 00:49:50.202175 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.46s 2026-03-08 00:49:50.202183 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.40s 2026-03-08 00:49:50.202192 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s 2026-03-08 00:49:50.202201 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.10s 2026-03-08 00:49:50.202209 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.07s 2026-03-08 00:49:50.202218 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.05s 2026-03-08 00:49:50.202226 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.04s 2026-03-08 00:49:50.202235 | orchestrator | 2026-03-08 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:53.241866 | orchestrator | 2026-03-08 00:49:53 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:49:53.242228 | orchestrator | 2026-03-08 00:49:53 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:49:53.245867 | orchestrator | 2026-03-08 00:49:53 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:49:53.245911 | orchestrator | 2026-03-08 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:49:56.278941 | 
2026-03-08 00:50:41 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:50:41.983580 | orchestrator | 2026-03-08 00:50:41 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:50:41.983609 | orchestrator | 2026-03-08 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:50:45.026221 | orchestrator | 2026-03-08 00:50:45 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:50:45.026398 | orchestrator | 2026-03-08 00:50:45 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:50:45.026462 | orchestrator | 2026-03-08 00:50:45 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state STARTED 2026-03-08 00:50:45.026474 | orchestrator | 2026-03-08 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:50:48.071545 | orchestrator | 2026-03-08 00:50:48 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:50:48.073332 | orchestrator | 2026-03-08 00:50:48 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:50:48.077144 | orchestrator | 2026-03-08 00:50:48 | INFO  | Task 20c38e53-9724-4435-b3ed-dca92c029193 is in state SUCCESS 2026-03-08 00:50:48.077510 | orchestrator | 2026-03-08 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:50:48.078996 | orchestrator | 2026-03-08 00:50:48.079032 | orchestrator | 2026-03-08 00:50:48.079040 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:50:48.079047 | orchestrator | 2026-03-08 00:50:48.079054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:50:48.079061 | orchestrator | Sunday 08 March 2026 00:48:19 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-03-08 00:50:48.079068 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:50:48.079075 | orchestrator | ok: 
[testbed-node-4] 2026-03-08 00:50:48.079082 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:50:48.079089 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.079095 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.079102 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.079134 | orchestrator | 2026-03-08 00:50:48.079141 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:50:48.079148 | orchestrator | Sunday 08 March 2026 00:48:20 +0000 (0:00:00.836) 0:00:01.031 ********** 2026-03-08 00:50:48.079155 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-08 00:50:48.079162 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-08 00:50:48.079169 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-08 00:50:48.079175 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-08 00:50:48.079182 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-08 00:50:48.079189 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-08 00:50:48.079195 | orchestrator | 2026-03-08 00:50:48.079253 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-08 00:50:48.079262 | orchestrator | 2026-03-08 00:50:48.079277 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-08 00:50:48.079284 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:01.123) 0:00:02.154 ********** 2026-03-08 00:50:48.079291 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:48.079322 | orchestrator | 2026-03-08 00:50:48.079330 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-08 00:50:48.079336 | orchestrator | Sunday 08 March 2026 
00:48:23 +0000 (0:00:02.134) 0:00:04.289 ********** 2026-03-08 00:50:48.079345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079482 | orchestrator | 2026-03-08 00:50:48.079506 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-08 00:50:48.079519 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:01.754) 0:00:06.044 ********** 2026-03-08 00:50:48.079531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079637 | orchestrator | 2026-03-08 00:50:48.079646 | orchestrator | TASK [ovn-controller 
: Ensuring systemd override directory exists] ************* 2026-03-08 00:50:48.079654 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:01.675) 0:00:07.719 ********** 2026-03-08 00:50:48.079663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-08 00:50:48.079707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079724 | orchestrator | 2026-03-08 00:50:48.079732 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-08 00:50:48.079740 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:01.535) 0:00:09.255 ********** 2026-03-08 00:50:48.079748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079885 | orchestrator | 2026-03-08 00:50:48.079899 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-08 00:50:48.079909 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:01.614) 0:00:10.869 ********** 2026-03-08 00:50:48.079917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.079972 | orchestrator | 2026-03-08 00:50:48.079980 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-08 00:50:48.079989 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:01.441) 0:00:12.311 ********** 2026-03-08 00:50:48.079996 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:50:48.080003 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:50:48.080010 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:50:48.080017 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:50:48.080024 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:50:48.080030 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:50:48.080037 | orchestrator | 2026-03-08 00:50:48.080043 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-08 
00:50:48.080050 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:02.429) 0:00:14.740 ********** 2026-03-08 00:50:48.080061 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-08 00:50:48.080068 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-08 00:50:48.080074 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:50:48.080081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-08 00:50:48.080088 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-08 00:50:48.080094 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:50:48.080101 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-08 00:50:48.080107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-08 00:50:48.080117 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:50:48.080132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:50:48.080138 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-08 00:50:48.080145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 
'value': 'geneve'}) 2026-03-08 00:50:48.080159 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080173 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080182 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-08 00:50:48.080203 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080216 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080223 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080229 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-08 00:50:48.080236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080249 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080263 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080273 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-08 00:50:48.080280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080287 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:50:48.080294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080307 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-08 00:50:48.080320 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:50:48.080327 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-08 00:50:48.080334 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:50:48.080341 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:50:48.080347 | orchestrator | ok: [testbed-node-5] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-08 00:50:48.080354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-08 00:50:48.080361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-08 00:50:48.080370 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:50:48.080377 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-08 00:50:48.080384 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-08 00:50:48.080391 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-08 00:50:48.080397 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-08 00:50:48.080404 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:50:48.080411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:50:48.080418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:50:48.080425 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-08 00:50:48.080434 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-08 00:50:48.080441 | orchestrator | 2026-03-08 00:50:48.080448 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080454 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:18.616) 0:00:33.357 ********** 2026-03-08 00:50:48.080461 | orchestrator | 2026-03-08 00:50:48.080468 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080474 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.068) 0:00:33.425 ********** 2026-03-08 00:50:48.080485 | orchestrator | 2026-03-08 00:50:48.080491 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080498 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.116) 0:00:33.542 ********** 2026-03-08 00:50:48.080505 | orchestrator | 2026-03-08 00:50:48.080511 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080518 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.093) 0:00:33.636 ********** 2026-03-08 00:50:48.080524 | orchestrator | 2026-03-08 00:50:48.080531 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080538 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.085) 0:00:33.721 ********** 2026-03-08 00:50:48.080544 | orchestrator | 2026-03-08 00:50:48.080551 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-08 00:50:48.080557 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.078) 0:00:33.800 ********** 2026-03-08 00:50:48.080564 | orchestrator | 2026-03-08 00:50:48.080571 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-08 00:50:48.080577 | orchestrator | 
Sunday 08 March 2026 00:48:53 +0000 (0:00:00.080) 0:00:33.881 ********** 2026-03-08 00:50:48.080584 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:50:48.080591 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.080597 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080615 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:50:48.080622 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080629 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:50:48.080635 | orchestrator | 2026-03-08 00:50:48.080642 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-08 00:50:48.080649 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:02.006) 0:00:35.888 ********** 2026-03-08 00:50:48.080656 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:50:48.080662 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:50:48.080669 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:50:48.080676 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:50:48.080682 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:50:48.080689 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:50:48.080696 | orchestrator | 2026-03-08 00:50:48.080703 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-08 00:50:48.080709 | orchestrator | 2026-03-08 00:50:48.080716 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:50:48.080723 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:23.983) 0:00:59.871 ********** 2026-03-08 00:50:48.080729 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:48.080736 | orchestrator | 2026-03-08 00:50:48.080743 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:50:48.080750 | orchestrator | Sunday 08 March 
2026 00:49:20 +0000 (0:00:00.868) 0:01:00.740 ********** 2026-03-08 00:50:48.080756 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:48.080763 | orchestrator | 2026-03-08 00:50:48.080770 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-08 00:50:48.080776 | orchestrator | Sunday 08 March 2026 00:49:20 +0000 (0:00:00.649) 0:01:01.389 ********** 2026-03-08 00:50:48.080783 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.080790 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080796 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080803 | orchestrator | 2026-03-08 00:50:48.080810 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-08 00:50:48.080816 | orchestrator | Sunday 08 March 2026 00:49:21 +0000 (0:00:01.141) 0:01:02.530 ********** 2026-03-08 00:50:48.080823 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.080829 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080836 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080851 | orchestrator | 2026-03-08 00:50:48.080858 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-08 00:50:48.080865 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:00.337) 0:01:02.868 ********** 2026-03-08 00:50:48.080871 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.080878 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080885 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080891 | orchestrator | 2026-03-08 00:50:48.080898 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-08 00:50:48.080905 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:00.340) 0:01:03.208 ********** 2026-03-08 00:50:48.080911 | orchestrator | ok: 
[testbed-node-0] 2026-03-08 00:50:48.080918 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080924 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080931 | orchestrator | 2026-03-08 00:50:48.080938 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-08 00:50:48.080944 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:00.333) 0:01:03.542 ********** 2026-03-08 00:50:48.080951 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.080958 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.080964 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.080971 | orchestrator | 2026-03-08 00:50:48.080978 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-08 00:50:48.080984 | orchestrator | Sunday 08 March 2026 00:49:23 +0000 (0:00:01.109) 0:01:04.651 ********** 2026-03-08 00:50:48.080991 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.080998 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081004 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081011 | orchestrator | 2026-03-08 00:50:48.081021 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-08 00:50:48.081028 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:00.337) 0:01:04.988 ********** 2026-03-08 00:50:48.081034 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081041 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081047 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081054 | orchestrator | 2026-03-08 00:50:48.081061 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-08 00:50:48.081067 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:00.332) 0:01:05.321 ********** 2026-03-08 00:50:48.081074 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
00:50:48.081081 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081087 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081094 | orchestrator | 2026-03-08 00:50:48.081100 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-08 00:50:48.081107 | orchestrator | Sunday 08 March 2026 00:49:24 +0000 (0:00:00.341) 0:01:05.662 ********** 2026-03-08 00:50:48.081114 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081120 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081127 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081134 | orchestrator | 2026-03-08 00:50:48.081140 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-08 00:50:48.081147 | orchestrator | Sunday 08 March 2026 00:49:25 +0000 (0:00:00.744) 0:01:06.406 ********** 2026-03-08 00:50:48.081154 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081160 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081167 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081173 | orchestrator | 2026-03-08 00:50:48.081180 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-08 00:50:48.081187 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.484) 0:01:06.891 ********** 2026-03-08 00:50:48.081194 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081200 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081207 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081213 | orchestrator | 2026-03-08 00:50:48.081220 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-08 00:50:48.081230 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.366) 0:01:07.258 ********** 2026-03-08 00:50:48.081237 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
00:50:48.081252 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081259 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081265 | orchestrator | 2026-03-08 00:50:48.081272 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-08 00:50:48.081278 | orchestrator | Sunday 08 March 2026 00:49:26 +0000 (0:00:00.338) 0:01:07.596 ********** 2026-03-08 00:50:48.081285 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081292 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081298 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081305 | orchestrator | 2026-03-08 00:50:48.081312 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-08 00:50:48.081318 | orchestrator | Sunday 08 March 2026 00:49:27 +0000 (0:00:00.546) 0:01:08.142 ********** 2026-03-08 00:50:48.081325 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081332 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081338 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081345 | orchestrator | 2026-03-08 00:50:48.081351 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-08 00:50:48.081358 | orchestrator | Sunday 08 March 2026 00:49:27 +0000 (0:00:00.312) 0:01:08.455 ********** 2026-03-08 00:50:48.081365 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081371 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081378 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081385 | orchestrator | 2026-03-08 00:50:48.081391 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-08 00:50:48.081398 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:00.320) 0:01:08.775 ********** 2026-03-08 00:50:48.081405 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
00:50:48.081411 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081418 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081424 | orchestrator | 2026-03-08 00:50:48.081431 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-08 00:50:48.081438 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:00.271) 0:01:09.047 ********** 2026-03-08 00:50:48.081444 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081451 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081461 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081468 | orchestrator | 2026-03-08 00:50:48.081475 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-08 00:50:48.081482 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:00.280) 0:01:09.327 ********** 2026-03-08 00:50:48.081489 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:50:48.081495 | orchestrator | 2026-03-08 00:50:48.081502 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-08 00:50:48.081509 | orchestrator | Sunday 08 March 2026 00:49:29 +0000 (0:00:00.722) 0:01:10.050 ********** 2026-03-08 00:50:48.081515 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.081522 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.081529 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.081535 | orchestrator | 2026-03-08 00:50:48.081542 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-08 00:50:48.081549 | orchestrator | Sunday 08 March 2026 00:49:29 +0000 (0:00:00.574) 0:01:10.624 ********** 2026-03-08 00:50:48.081555 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:50:48.081562 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:50:48.081568 | 
orchestrator | ok: [testbed-node-1] 2026-03-08 00:50:48.081575 | orchestrator | 2026-03-08 00:50:48.081582 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-08 00:50:48.081588 | orchestrator | Sunday 08 March 2026 00:49:30 +0000 (0:00:00.540) 0:01:11.164 ********** 2026-03-08 00:50:48.081598 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081616 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081623 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081633 | orchestrator | 2026-03-08 00:50:48.081639 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-08 00:50:48.081646 | orchestrator | Sunday 08 March 2026 00:49:31 +0000 (0:00:00.558) 0:01:11.723 ********** 2026-03-08 00:50:48.081653 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081659 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081666 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081673 | orchestrator | 2026-03-08 00:50:48.081679 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-08 00:50:48.081686 | orchestrator | Sunday 08 March 2026 00:49:31 +0000 (0:00:00.405) 0:01:12.128 ********** 2026-03-08 00:50:48.081693 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081699 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081706 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081712 | orchestrator | 2026-03-08 00:50:48.081719 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-08 00:50:48.081726 | orchestrator | Sunday 08 March 2026 00:49:31 +0000 (0:00:00.481) 0:01:12.610 ********** 2026-03-08 00:50:48.081732 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081739 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
00:50:48.081745 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081752 | orchestrator | 2026-03-08 00:50:48.081759 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-08 00:50:48.081765 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:00.381) 0:01:12.992 ********** 2026-03-08 00:50:48.081772 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081778 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081785 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081792 | orchestrator | 2026-03-08 00:50:48.081798 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-08 00:50:48.081805 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:00.563) 0:01:13.555 ********** 2026-03-08 00:50:48.081812 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:50:48.081818 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:50:48.081825 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:50:48.081831 | orchestrator | 2026-03-08 00:50:48.081838 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-08 00:50:48.081845 | orchestrator | Sunday 08 March 2026 00:49:33 +0000 (0:00:00.384) 0:01:13.940 ********** 2026-03-08 00:50:48.081852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.081866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.081874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082253 | orchestrator | 2026-03-08 00:50:48.082260 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-08 00:50:48.082267 | orchestrator | Sunday 08 March 2026 00:49:34 +0000 (0:00:01.579) 0:01:15.519 ********** 2026-03-08 00:50:48.082274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-08 00:50:48.082327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082360 | orchestrator | 2026-03-08 00:50:48.082367 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-08 00:50:48.082373 | orchestrator | Sunday 08 March 2026 00:49:39 +0000 (0:00:04.487) 0:01:20.006 ********** 2026-03-08 00:50:48.082381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-08 00:50:48.082388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.082454 | orchestrator | 2026-03-08 00:50:48.082461 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:50:48.082468 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:02.462) 0:01:22.469 ********** 2026-03-08 00:50:48.082475 | orchestrator | 2026-03-08 00:50:48.082482 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-08 00:50:48.082489 | 
orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:00.075) 0:01:22.545 **********
2026-03-08 00:50:48.082495 | orchestrator |
2026-03-08 00:50:48.082502 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-08 00:50:48.082509 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:00.086) 0:01:22.631 **********
2026-03-08 00:50:48.082515 | orchestrator |
2026-03-08 00:50:48.082522 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-08 00:50:48.082529 | orchestrator | Sunday 08 March 2026 00:49:42 +0000 (0:00:00.112) 0:01:22.743 **********
2026-03-08 00:50:48.082535 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.082542 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.082549 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.082555 | orchestrator |
2026-03-08 00:50:48.082562 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-08 00:50:48.082569 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:08.075) 0:01:30.819 **********
2026-03-08 00:50:48.082575 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.082582 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.082589 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.082595 | orchestrator |
2026-03-08 00:50:48.082616 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-08 00:50:48.082628 | orchestrator | Sunday 08 March 2026 00:49:57 +0000 (0:00:07.671) 0:01:38.491 **********
2026-03-08 00:50:48.082634 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.082641 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.082648 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.082655 | orchestrator |
2026-03-08 00:50:48.082661 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-08 00:50:48.082668 | orchestrator | Sunday 08 March 2026 00:50:05 +0000 (0:00:07.462) 0:01:45.954 **********
2026-03-08 00:50:48.082675 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:50:48.082681 | orchestrator |
2026-03-08 00:50:48.082688 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-08 00:50:48.082695 | orchestrator | Sunday 08 March 2026 00:50:05 +0000 (0:00:00.342) 0:01:46.296 **********
2026-03-08 00:50:48.082702 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.082709 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.082715 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.082722 | orchestrator |
2026-03-08 00:50:48.082730 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-08 00:50:48.082738 | orchestrator | Sunday 08 March 2026 00:50:06 +0000 (0:00:00.837) 0:01:47.134 **********
2026-03-08 00:50:48.082747 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:50:48.082755 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:50:48.082764 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.082773 | orchestrator |
2026-03-08 00:50:48.082782 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-08 00:50:48.082790 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:00.622) 0:01:47.757 **********
2026-03-08 00:50:48.082798 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.082807 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.082816 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.082824 | orchestrator |
2026-03-08 00:50:48.082833 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-08 00:50:48.082842 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:00.795) 0:01:48.552 **********
2026-03-08 00:50:48.082850 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:50:48.082859 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:50:48.082867 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.082876 | orchestrator |
2026-03-08 00:50:48.082885 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-08 00:50:48.082902 | orchestrator | Sunday 08 March 2026 00:50:08 +0000 (0:00:00.883) 0:01:49.436 **********
2026-03-08 00:50:48.082911 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.082920 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.082933 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.082942 | orchestrator |
2026-03-08 00:50:48.082951 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-08 00:50:48.082960 | orchestrator | Sunday 08 March 2026 00:50:09 +0000 (0:00:00.951) 0:01:50.388 **********
2026-03-08 00:50:48.082969 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.082977 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.082986 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.082995 | orchestrator |
2026-03-08 00:50:48.083003 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-08 00:50:48.083012 | orchestrator | Sunday 08 March 2026 00:50:10 +0000 (0:00:00.846) 0:01:51.234 **********
2026-03-08 00:50:48.083021 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.083029 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.083038 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.083047 | orchestrator |
2026-03-08 00:50:48.083056 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-08 00:50:48.083064 | orchestrator | Sunday 08 March 2026 00:50:10 +0000 (0:00:00.349) 0:01:51.584 **********
2026-03-08 00:50:48.083073 | orchestrator | ok: [testbed-node-0] =>
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083099 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083108 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083116 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 
00:50:48.083124 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083139 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083154 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083162 | orchestrator | 2026-03-08 00:50:48.083170 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-08 00:50:48.083177 | orchestrator | Sunday 08 March 2026 00:50:12 +0000 (0:00:01.457) 
0:01:53.042 ********** 2026-03-08 00:50:48.083184 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083196 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083261 | orchestrator | 2026-03-08 00:50:48.083268 | orchestrator | TASK [ovn-db : Check ovn containers] 
******************************************* 2026-03-08 00:50:48.083275 | orchestrator | Sunday 08 March 2026 00:50:17 +0000 (0:00:04.727) 0:01:57.769 ********** 2026-03-08 00:50:48.083286 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083298 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083332 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 00:50:48.083354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-03-08 00:50:48.083362 | orchestrator |
2026-03-08 00:50:48.083369 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-08 00:50:48.083376 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:03.086) 0:02:00.856 **********
2026-03-08 00:50:48.083384 | orchestrator |
2026-03-08 00:50:48.083391 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-08 00:50:48.083398 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:00.072) 0:02:00.929 **********
2026-03-08 00:50:48.083405 | orchestrator |
2026-03-08 00:50:48.083413 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-08 00:50:48.083423 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:00.085) 0:02:01.014 **********
2026-03-08 00:50:48.083431 | orchestrator |
2026-03-08 00:50:48.083443 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-08 00:50:48.083459 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:00.085) 0:02:01.100 **********
2026-03-08 00:50:48.083477 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.083489 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.083502 | orchestrator |
2026-03-08 00:50:48.083520 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-08 00:50:48.083533 | orchestrator | Sunday 08 March 2026 00:50:26 +0000 (0:00:06.251) 0:02:07.351 **********
2026-03-08 00:50:48.083545 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.083559 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.083571 | orchestrator |
2026-03-08 00:50:48.083585 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-08 00:50:48.083598 | orchestrator | Sunday 08 March 2026 00:50:32 +0000 (0:00:06.392) 0:02:13.604 **********
2026-03-08 00:50:48.083639 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:50:48.083648 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:50:48.083655 | orchestrator |
2026-03-08 00:50:48.083662 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-08 00:50:48.083669 | orchestrator | Sunday 08 March 2026 00:50:39 +0000 (0:00:06.392) 0:02:19.997 **********
2026-03-08 00:50:48.083676 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:50:48.083683 | orchestrator |
2026-03-08 00:50:48.083690 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-08 00:50:48.083697 | orchestrator | Sunday 08 March 2026 00:50:39 +0000 (0:00:00.140) 0:02:20.138 **********
2026-03-08 00:50:48.083704 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.083712 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.083719 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.083726 | orchestrator |
2026-03-08 00:50:48.083733 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-08 00:50:48.083740 | orchestrator | Sunday 08 March 2026 00:50:40 +0000 (0:00:00.886) 0:02:21.024 **********
2026-03-08 00:50:48.083752 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:50:48.083759 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:50:48.083766 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.083774 | orchestrator |
2026-03-08 00:50:48.083781 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-08 00:50:48.083788 | orchestrator | Sunday 08 March 2026 00:50:41 +0000 (0:00:00.710) 0:02:21.734 **********
2026-03-08 00:50:48.083795 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.083802 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.083809 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.083816 | orchestrator |
2026-03-08 00:50:48.083824 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-08 00:50:48.083831 | orchestrator | Sunday 08 March 2026 00:50:41 +0000 (0:00:00.818) 0:02:22.553 **********
2026-03-08 00:50:48.083838 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:50:48.083845 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:50:48.083853 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:50:48.083860 | orchestrator |
2026-03-08 00:50:48.083867 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-08 00:50:48.083874 | orchestrator | Sunday 08 March 2026 00:50:42 +0000 (0:00:00.754) 0:02:23.307 **********
2026-03-08 00:50:48.083881 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.083889 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.083896 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.083903 | orchestrator |
2026-03-08 00:50:48.083910 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-08 00:50:48.083917 | orchestrator | Sunday 08 March 2026 00:50:43 +0000 (0:00:00.957) 0:02:24.265 **********
2026-03-08 00:50:48.083924 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:50:48.083938 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:50:48.083945 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:50:48.083953 | orchestrator |
2026-03-08 00:50:48.083960 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:50:48.083967 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-08 00:50:48.083975 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-08 00:50:48.083982 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-08 00:50:48.083990 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:50:48.083997 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:50:48.084004 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 00:50:48.084012 | orchestrator |
2026-03-08 00:50:48.084019 | orchestrator |
2026-03-08 00:50:48.084026 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:50:48.084033 | orchestrator | Sunday 08 March 2026 00:50:44 +0000 (0:00:00.955) 0:02:25.220 **********
2026-03-08 00:50:48.084040 | orchestrator | ===============================================================================
2026-03-08 00:50:48.084048 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.98s
2026-03-08 00:50:48.084055 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.61s
2026-03-08 00:50:48.084062 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.33s
2026-03-08 00:50:48.084069 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.92s
2026-03-08 00:50:48.084076 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.86s
2026-03-08 00:50:48.084083 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.73s
2026-03-08 00:50:48.084091 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.49s
2026-03-08 00:50:48.084103 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s
2026-03-08 00:50:48.084110 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.46s
2026-03-08 00:50:48.084117 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s
2026-03-08 00:50:48.084124 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.13s
2026-03-08 00:50:48.084132 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.01s
2026-03-08 00:50:48.084139 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.75s
2026-03-08 00:50:48.084146 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.68s
2026-03-08 00:50:48.084153 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.61s
2026-03-08 00:50:48.084160 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s
2026-03-08 00:50:48.084168 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.54s
2026-03-08 00:50:48.084175 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s
2026-03-08 00:50:48.084182 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.44s
2026-03-08 00:50:48.084189 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.14s
2026-03-08 00:50:51.120299 | orchestrator | 2026-03-08 00:50:51 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:50:51.122249 | orchestrator | 2026-03-08 00:50:51 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:50:51.122295 | orchestrator | 2026-03-08 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:54.165971 | orchestrator | 2026-03-08 00:50:54 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:50:54.169043 | orchestrator | 2026-03-08 00:50:54 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:50:54.169155 | orchestrator | 2026-03-08 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:50:57.210447 | orchestrator | 2026-03-08 00:50:57 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:50:57.211591 | orchestrator | 2026-03-08 00:50:57 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:50:57.211748 | orchestrator | 2026-03-08 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:00.246066 | orchestrator | 2026-03-08 00:51:00 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:00.246136 | orchestrator | 2026-03-08 00:51:00 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:00.246143 | orchestrator | 2026-03-08 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:03.286125 | orchestrator | 2026-03-08 00:51:03 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:03.286975 | orchestrator | 2026-03-08 00:51:03 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:03.287029 | orchestrator | 2026-03-08 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:06.322585 | orchestrator | 2026-03-08 00:51:06 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:06.323702 | orchestrator | 2026-03-08 00:51:06 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:06.323740 | orchestrator | 2026-03-08 00:51:06 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:09.367964 | orchestrator | 2026-03-08 00:51:09 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:09.369371 | orchestrator | 2026-03-08 00:51:09 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:09.369413 | orchestrator | 2026-03-08 00:51:09 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:12.412188 | orchestrator | 2026-03-08 00:51:12 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:12.412310 | orchestrator | 2026-03-08 00:51:12 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:12.412335 | orchestrator | 2026-03-08 00:51:12 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:15.444811 | orchestrator | 2026-03-08 00:51:15 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:15.444984 | orchestrator | 2026-03-08 00:51:15 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:15.444998 | orchestrator | 2026-03-08 00:51:15 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:18.481722 | orchestrator | 2026-03-08 00:51:18 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:18.482931 | orchestrator | 2026-03-08 00:51:18 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:18.482995 | orchestrator | 2026-03-08 00:51:18 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:21.518200 | orchestrator | 2026-03-08 00:51:21 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:21.519189 | orchestrator | 2026-03-08 00:51:21 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:21.519233 | orchestrator | 2026-03-08 00:51:21 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:24.548977 | orchestrator | 2026-03-08 00:51:24 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:24.550434 | orchestrator | 2026-03-08 00:51:24 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:24.550482 | orchestrator | 2026-03-08 00:51:24 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:27.584097 | orchestrator | 2026-03-08 00:51:27 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:27.587342 | orchestrator | 2026-03-08 00:51:27 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:27.587391 | orchestrator | 2026-03-08 00:51:27 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:30.641101 | orchestrator | 2026-03-08 00:51:30 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:30.643722 | orchestrator | 2026-03-08 00:51:30 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:30.643769 | orchestrator | 2026-03-08 00:51:30 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:33.703374 | orchestrator | 2026-03-08 00:51:33 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:33.707736 | orchestrator | 2026-03-08 00:51:33 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:33.708336 | orchestrator | 2026-03-08 00:51:33 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:36.746271 | orchestrator | 2026-03-08 00:51:36 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:36.746368 | orchestrator | 2026-03-08 00:51:36 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:36.746386 | orchestrator | 2026-03-08 00:51:36 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:39.793551 | orchestrator | 2026-03-08 00:51:39 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:51:39.795532 | orchestrator | 2026-03-08 00:51:39 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED
2026-03-08 00:51:39.795651 | orchestrator | 2026-03-08 00:51:39 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:51:42.842107 | orchestrator | 2026-03-08 00:51:42 | INFO  | Task 
409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:51:42.844910 | orchestrator | 2026-03-08 00:51:42 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:42.844974 | orchestrator | 2026-03-08 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:45.877710 | orchestrator | 2026-03-08 00:51:45 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:51:45.879555 | orchestrator | 2026-03-08 00:51:45 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:45.879622 | orchestrator | 2026-03-08 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:48.932447 | orchestrator | 2026-03-08 00:51:48 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:51:48.933817 | orchestrator | 2026-03-08 00:51:48 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:48.933893 | orchestrator | 2026-03-08 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:51.965674 | orchestrator | 2026-03-08 00:51:51 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:51:51.965767 | orchestrator | 2026-03-08 00:51:51 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:51.965779 | orchestrator | 2026-03-08 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:55.009419 | orchestrator | 2026-03-08 00:51:55 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:51:55.012570 | orchestrator | 2026-03-08 00:51:55 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:55.012649 | orchestrator | 2026-03-08 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:51:58.046574 | orchestrator | 2026-03-08 00:51:58 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 
00:51:58.048298 | orchestrator | 2026-03-08 00:51:58 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:51:58.048370 | orchestrator | 2026-03-08 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:01.085753 | orchestrator | 2026-03-08 00:52:01 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:01.087720 | orchestrator | 2026-03-08 00:52:01 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:01.087792 | orchestrator | 2026-03-08 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:04.126531 | orchestrator | 2026-03-08 00:52:04 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:04.128083 | orchestrator | 2026-03-08 00:52:04 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:04.128508 | orchestrator | 2026-03-08 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:07.167130 | orchestrator | 2026-03-08 00:52:07 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:07.168241 | orchestrator | 2026-03-08 00:52:07 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:07.168783 | orchestrator | 2026-03-08 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:10.204056 | orchestrator | 2026-03-08 00:52:10 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:10.205097 | orchestrator | 2026-03-08 00:52:10 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:10.205142 | orchestrator | 2026-03-08 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:13.247536 | orchestrator | 2026-03-08 00:52:13 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:13.249909 | orchestrator | 2026-03-08 00:52:13 | INFO  | Task 
212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:13.249958 | orchestrator | 2026-03-08 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:16.293764 | orchestrator | 2026-03-08 00:52:16 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:16.295880 | orchestrator | 2026-03-08 00:52:16 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:16.295965 | orchestrator | 2026-03-08 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:19.337305 | orchestrator | 2026-03-08 00:52:19 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:19.338925 | orchestrator | 2026-03-08 00:52:19 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:19.339097 | orchestrator | 2026-03-08 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:22.387085 | orchestrator | 2026-03-08 00:52:22 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:22.387166 | orchestrator | 2026-03-08 00:52:22 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:22.387175 | orchestrator | 2026-03-08 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:25.431160 | orchestrator | 2026-03-08 00:52:25 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:25.431242 | orchestrator | 2026-03-08 00:52:25 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:25.432455 | orchestrator | 2026-03-08 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:28.477286 | orchestrator | 2026-03-08 00:52:28 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:28.477617 | orchestrator | 2026-03-08 00:52:28 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 
00:52:28.477992 | orchestrator | 2026-03-08 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:31.533559 | orchestrator | 2026-03-08 00:52:31 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:31.534589 | orchestrator | 2026-03-08 00:52:31 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:31.534631 | orchestrator | 2026-03-08 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:34.583070 | orchestrator | 2026-03-08 00:52:34 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:34.583168 | orchestrator | 2026-03-08 00:52:34 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:34.583181 | orchestrator | 2026-03-08 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:37.625472 | orchestrator | 2026-03-08 00:52:37 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:37.628077 | orchestrator | 2026-03-08 00:52:37 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:37.628137 | orchestrator | 2026-03-08 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:40.669943 | orchestrator | 2026-03-08 00:52:40 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:40.671919 | orchestrator | 2026-03-08 00:52:40 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:40.671983 | orchestrator | 2026-03-08 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:43.717476 | orchestrator | 2026-03-08 00:52:43 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:43.720248 | orchestrator | 2026-03-08 00:52:43 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:43.720480 | orchestrator | 2026-03-08 00:52:43 | INFO  | Wait 1 second(s) 
until the next check 2026-03-08 00:52:46.773720 | orchestrator | 2026-03-08 00:52:46 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:46.775545 | orchestrator | 2026-03-08 00:52:46 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:46.775742 | orchestrator | 2026-03-08 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:49.814665 | orchestrator | 2026-03-08 00:52:49 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:49.817755 | orchestrator | 2026-03-08 00:52:49 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:49.817821 | orchestrator | 2026-03-08 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:52.859514 | orchestrator | 2026-03-08 00:52:52 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:52.862286 | orchestrator | 2026-03-08 00:52:52 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:52.862984 | orchestrator | 2026-03-08 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:55.917037 | orchestrator | 2026-03-08 00:52:55 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:55.918760 | orchestrator | 2026-03-08 00:52:55 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:55.918815 | orchestrator | 2026-03-08 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:52:58.967034 | orchestrator | 2026-03-08 00:52:58 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:52:58.968898 | orchestrator | 2026-03-08 00:52:58 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:52:58.968940 | orchestrator | 2026-03-08 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:02.016156 | orchestrator | 2026-03-08 
00:53:02 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:02.016870 | orchestrator | 2026-03-08 00:53:02 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:02.016927 | orchestrator | 2026-03-08 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:05.051945 | orchestrator | 2026-03-08 00:53:05 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:05.052191 | orchestrator | 2026-03-08 00:53:05 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:05.052211 | orchestrator | 2026-03-08 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:08.082737 | orchestrator | 2026-03-08 00:53:08 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:08.083532 | orchestrator | 2026-03-08 00:53:08 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:08.083565 | orchestrator | 2026-03-08 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:11.137665 | orchestrator | 2026-03-08 00:53:11 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:11.141541 | orchestrator | 2026-03-08 00:53:11 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:11.141593 | orchestrator | 2026-03-08 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:14.183381 | orchestrator | 2026-03-08 00:53:14 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:14.184613 | orchestrator | 2026-03-08 00:53:14 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:14.185144 | orchestrator | 2026-03-08 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:17.222902 | orchestrator | 2026-03-08 00:53:17 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state 
STARTED 2026-03-08 00:53:17.223196 | orchestrator | 2026-03-08 00:53:17 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:17.223219 | orchestrator | 2026-03-08 00:53:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:20.263879 | orchestrator | 2026-03-08 00:53:20 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:20.265426 | orchestrator | 2026-03-08 00:53:20 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:20.265476 | orchestrator | 2026-03-08 00:53:20 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:23.323180 | orchestrator | 2026-03-08 00:53:23 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:23.325447 | orchestrator | 2026-03-08 00:53:23 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:23.325504 | orchestrator | 2026-03-08 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:26.375410 | orchestrator | 2026-03-08 00:53:26 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:26.377136 | orchestrator | 2026-03-08 00:53:26 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:26.377201 | orchestrator | 2026-03-08 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:29.435512 | orchestrator | 2026-03-08 00:53:29 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:29.436991 | orchestrator | 2026-03-08 00:53:29 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:29.437114 | orchestrator | 2026-03-08 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:32.486480 | orchestrator | 2026-03-08 00:53:32 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:32.486612 | orchestrator | 2026-03-08 00:53:32 | INFO  
| Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:32.486622 | orchestrator | 2026-03-08 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:35.536010 | orchestrator | 2026-03-08 00:53:35 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:35.538265 | orchestrator | 2026-03-08 00:53:35 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:35.538336 | orchestrator | 2026-03-08 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:38.583817 | orchestrator | 2026-03-08 00:53:38 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:38.584977 | orchestrator | 2026-03-08 00:53:38 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state STARTED 2026-03-08 00:53:38.585016 | orchestrator | 2026-03-08 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:41.636641 | orchestrator | 2026-03-08 00:53:41 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:53:41.643518 | orchestrator | 2026-03-08 00:53:41 | INFO  | Task 212d0947-077b-4d0e-8d25-698352b4219f is in state SUCCESS 2026-03-08 00:53:41.645661 | orchestrator | 2026-03-08 00:53:41.645748 | orchestrator | 2026-03-08 00:53:41.645762 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:53:41.645770 | orchestrator | 2026-03-08 00:53:41.645777 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:53:41.645784 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:00.367) 0:00:00.367 ********** 2026-03-08 00:53:41.645791 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.645798 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.645803 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.645809 | orchestrator | 2026-03-08 00:53:41.645815 | orchestrator | 
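The long run of "is in state STARTED" messages above is a client-side polling loop: the orchestrator repeatedly queries the state of two deployment tasks until each reaches a terminal state (here, SUCCESS for task 212d0947). A minimal sketch of that polling pattern in Python, assuming a caller-supplied `get_state` lookup (the actual OSISM client API and its task-state backend are not shown in this log):

```python
import time

# Terminal Celery-style task states; an assumption based on the
# STARTED/SUCCESS states visible in the log above.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task's state until all reach a terminal state.

    get_state(task_id) -> str is a hypothetical caller-supplied lookup;
    interval mirrors the "Wait 1 second(s)" message in the log.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

In the log each task is re-checked roughly every three seconds (one second of sleep plus the round-trip cost of the state queries themselves).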
TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:53:41.645872 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.526) 0:00:00.893 **********
2026-03-08 00:53:41.645881 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-08 00:53:41.645887 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-08 00:53:41.645894 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-08 00:53:41.645906 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-08 00:53:41.645919 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-08 00:53:41.645925 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.596) 0:00:01.489 **********
2026-03-08 00:53:41.645932 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.645944 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-08 00:53:41.645951 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.659) 0:00:02.149 **********
2026-03-08 00:53:41.645957 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:41.645962 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:41.645968 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:41.645980 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-08 00:53:41.645987 | orchestrator | Sunday 08 March 2026 00:47:12 +0000 (0:00:01.015) 0:00:03.164 **********
2026-03-08 00:53:41.646005 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.646058 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-08 00:53:41.646065 | orchestrator | Sunday 08 March 2026 00:47:13 +0000 (0:00:01.043) 0:00:04.208 **********
2026-03-08 00:53:41.646071 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:41.646077 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:41.646084 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:41.646096 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-08 00:53:41.646103 | orchestrator | Sunday 08 March 2026 00:47:14 +0000 (0:00:00.789) 0:00:04.997 **********
2026-03-08 00:53:41.646110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-08 00:53:41.646259 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:41.646267 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:41.646274 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-08 00:53:41.646280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:41.646287 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:41.646293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-08 00:53:41.646306 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-08 00:53:41.646313 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:03.422) 0:00:08.420 **********
2026-03-08 00:53:41.646329 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-08 00:53:41.646336 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-08 00:53:41.646343 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-08 00:53:41.646355 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-08 00:53:41.646362 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:00.763) 0:00:09.183 **********
2026-03-08 00:53:41.646368 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-08 00:53:41.646375 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-08 00:53:41.646381 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-08 00:53:41.646394 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-08 00:53:41.646401 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:01.834) 0:00:11.018 **********
2026-03-08 00:53:41.646407 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-08 00:53:41.646414 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.646436 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-08 00:53:41.646443 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.646450 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-08 00:53:41.646457 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.646469 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-08 00:53:41.646476 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.833) 0:00:11.852 **********
2026-03-08 00:53:41.646485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.646503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.646510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.646517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.646529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.646541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.646548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.646555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.646565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.646579 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-08 00:53:41.646586 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:03.016) 0:00:14.868 **********
2026-03-08 00:53:41.646593 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.646599 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.646605 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.646617 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-08 00:53:41.646623 | orchestrator | Sunday 08 March 2026 00:47:26 +0000 (0:00:01.906) 0:00:16.775 **********
2026-03-08 00:53:41.646629 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-08 00:53:41.646636 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-08 00:53:41.646647 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-08 00:53:41.646653 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-08 00:53:41.646660 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-08 00:53:41.646666 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-08 00:53:41.646678 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-08 00:53:41.646685 | orchestrator | Sunday 08 March 2026 00:47:28 +0000 (0:00:01.243) 0:00:18.854 **********
2026-03-08 00:53:41.646691 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.646697 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.646703 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.646751 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-08 00:53:41.646758 | orchestrator | Sunday 08 March 2026 00:47:29 +0000 (0:00:03.049) 0:00:20.097 **********
2026-03-08 00:53:41.646764 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:53:41.646771 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:53:41.646777 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:53:41.646790 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-08 00:53:41.646796 | orchestrator | Sunday 08 March 2026 00:47:32 +0000 (0:00:03.049) 0:00:23.147 **********
2026-03-08 00:53:41.646802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.646815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.646822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.646830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.646840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-08 00:53:41.646853 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.646860 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.646867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.646873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:41.646880 | 
orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.646892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.646899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.646920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.646991 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:41.646998 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.647005 | orchestrator | 2026-03-08 00:53:41.647011 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-08 00:53:41.647017 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:01.041) 0:00:24.188 ********** 2026-03-08 00:53:41.647036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.647097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:41.647104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.647117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:41.647127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.647152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c', '__omit_place_holder__9adf3265a0ca0bf3c20b2f07aa07d7c46c9f2f4c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-08 00:53:41.647159 | orchestrator | 2026-03-08 00:53:41.647167 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-08 00:53:41.647173 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:03.056) 0:00:27.244 ********** 2026-03-08 00:53:41.647197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647268 | orchestrator | 2026-03-08 00:53:41.647274 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-08 00:53:41.647279 | orchestrator | Sunday 08 March 2026 00:47:40 +0000 (0:00:03.837) 0:00:31.082 ********** 2026-03-08 00:53:41.647285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:41.647292 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:41.647299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-08 00:53:41.647305 | orchestrator | 2026-03-08 00:53:41.647311 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-08 00:53:41.647318 | orchestrator | Sunday 08 March 2026 00:47:44 +0000 (0:00:03.376) 0:00:34.458 ********** 2026-03-08 00:53:41.647324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:41.647330 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:41.647336 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-08 00:53:41.647348 | orchestrator | 2026-03-08 00:53:41.647532 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-08 00:53:41.647542 | orchestrator | Sunday 08 March 2026 00:47:47 +0000 (0:00:03.953) 0:00:38.412 ********** 2026-03-08 00:53:41.647549 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.647556 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.647562 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.647568 | orchestrator | 2026-03-08 00:53:41.647574 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-08 00:53:41.647581 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 (0:00:00.590) 0:00:39.003 ********** 2026-03-08 00:53:41.647587 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:41.647595 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:41.647601 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-08 00:53:41.647607 | orchestrator | 2026-03-08 00:53:41.647613 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-08 00:53:41.647620 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:04.067) 0:00:43.070 ********** 2026-03-08 00:53:41.647626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:41.647633 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:41.647639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-08 00:53:41.647645 | orchestrator | 2026-03-08 00:53:41.647651 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-08 00:53:41.647658 | orchestrator | Sunday 08 March 2026 00:47:55 +0000 (0:00:02.665) 0:00:45.735 ********** 2026-03-08 00:53:41.647669 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-08 00:53:41.647675 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-08 00:53:41.647682 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-08 00:53:41.647688 | orchestrator | 2026-03-08 00:53:41.647694 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-08 00:53:41.647700 | orchestrator | Sunday 08 March 2026 00:47:57 +0000 (0:00:02.529) 0:00:48.265 ********** 2026-03-08 00:53:41.647707 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-08 00:53:41.647713 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-08 00:53:41.647719 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-08 00:53:41.647726 | orchestrator | 2026-03-08 00:53:41.647732 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-08 00:53:41.647738 | orchestrator | Sunday 08 March 2026 00:47:59 +0000 (0:00:01.922) 0:00:50.188 ********** 2026-03-08 00:53:41.647745 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.647751 | orchestrator | 2026-03-08 00:53:41.647838 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over 
extra CA certificates] *** 2026-03-08 00:53:41.647845 | orchestrator | Sunday 08 March 2026 00:48:01 +0000 (0:00:01.485) 0:00:51.673 ********** 2026-03-08 00:53:41.647852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-08 00:53:41.647913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-08 00:53:41.647939 | orchestrator | 2026-03-08 00:53:41.647946 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-08 00:53:41.647952 | orchestrator | 
Sunday 08 March 2026 00:48:04 +0000 (0:00:03.662) 0:00:55.335 ********** 2026-03-08 00:53:41.647965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.647972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.647982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.647989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.647996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648013 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.648020 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
00:53:41.648027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648049 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.648055 | orchestrator | 2026-03-08 
00:53:41.648061 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-08 00:53:41.648066 | orchestrator | Sunday 08 March 2026 00:48:06 +0000 (0:00:01.195) 0:00:56.531 ********** 2026-03-08 00:53:41.648075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648099 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.648106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648130 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.648137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648167 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.648174 | orchestrator | 2026-03-08 00:53:41.648198 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-08 00:53:41.648205 | orchestrator | Sunday 08 March 2026 00:48:07 +0000 (0:00:01.819) 0:00:58.350 ********** 2026-03-08 00:53:41.648211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648237 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.648244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648276 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.648282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648306 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.648312 | orchestrator | 2026-03-08 00:53:41.648319 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-08 00:53:41.648325 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:00.968) 0:00:59.319 ********** 2026-03-08 00:53:41.648332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648355 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648361 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.648368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648381 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648845 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.648853 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.648860 | orchestrator | 2026-03-08 00:53:41.648867 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-08 00:53:41.648874 | orchestrator | Sunday 08 March 2026 00:48:09 +0000 (0:00:00.640) 0:00:59.960 ********** 2026-03-08 00:53:41.648882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648942 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.648949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.648956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.648976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.648983 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.648993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.649000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}})  2026-03-08 00:53:41.649007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-08 00:53:41.649014 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.649020 | orchestrator | 2026-03-08 00:53:41.649027 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-08 00:53:41.649034 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:00.916) 0:01:00.876 ********** 2026-03-08 00:53:41.649041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-08 00:53:41.649067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649113 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.649119 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.649126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649149 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.649156 | orchestrator |
2026-03-08 00:53:41.649162 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-08 00:53:41.649169 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:01.100) 0:01:01.977 **********
2026-03-08 00:53:41.649217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649241 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.649247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649269 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.649279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649368 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.649375 | orchestrator |
2026-03-08 00:53:41.649382 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-08 00:53:41.649389 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.607) 0:01:02.584 **********
2026-03-08 00:53:41.649396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649422 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.649429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649459 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.649483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649513 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.649521 | orchestrator |
2026-03-08 00:53:41.649528 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-08 00:53:41.649535 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.814) 0:01:03.398 **********
2026-03-08 00:53:41.649542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-08 00:53:41.649549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-08 00:53:41.649556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-08 00:53:41.649563 | orchestrator |
2026-03-08 00:53:41.649570 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-08 00:53:41.649577 | orchestrator | Sunday 08 March 2026 00:48:14 +0000 (0:00:01.771) 0:01:05.170 **********
2026-03-08 00:53:41.649584 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-08 00:53:41.649591 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-08 00:53:41.649598 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-08 00:53:41.649605 | orchestrator |
2026-03-08 00:53:41.649612 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-08 00:53:41.649618 | orchestrator | Sunday 08 March 2026 00:48:16 +0000 (0:00:01.661) 0:01:06.831 **********
2026-03-08 00:53:41.649625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-08 00:53:41.649635 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-08 00:53:41.649642 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-08 00:53:41.649649 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.649656 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-08 00:53:41.649663 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-08 00:53:41.649669 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.649676 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-08 00:53:41.649683 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.649689 | orchestrator |
2026-03-08 00:53:41.649699 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-08 00:53:41.649706 | orchestrator | Sunday 08 March 2026 00:48:17 +0000 (0:00:01.092) 0:01:07.924 **********
2026-03-08 00:53:41.649713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-08 00:53:41.649738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-08 00:53:41.649787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-08 00:53:41.649813 | orchestrator |
2026-03-08 00:53:41.649819 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-08 00:53:41.649827 | orchestrator | Sunday 08 March 2026 00:48:20 +0000 (0:00:02.842) 0:01:10.766 **********
2026-03-08 00:53:41.649833 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.649840 | orchestrator |
2026-03-08 00:53:41.649847 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-08 00:53:41.649853 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:00.771) 0:01:11.538 **********
2026-03-08 00:53:41.649861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-08 00:53:41.649869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-08 00:53:41.649880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.649932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-08 00:53:41.649951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.649958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-08 00:53:41.649965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.649971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.649981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-08 00:53:41.649991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-08 00:53:41.649998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650122 | orchestrator |
2026-03-08 00:53:41.650129 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-08 00:53:41.650135 | orchestrator | Sunday 08 March 2026 00:48:26 +0000 (0:00:04.970) 0:01:16.508 **********
2026-03-08 00:53:41.650142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-08 00:53:41.650148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-08 00:53:41.650160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650175 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.650231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-08 00:53:41.650238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-08 00:53:41.650244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650251 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.650257 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.650268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-08 00:53:41.650275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.650289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.650296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.650302 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.650309 | orchestrator | 2026-03-08 00:53:41.650315 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-08 00:53:41.650322 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:01.301) 0:01:17.810 ********** 2026-03-08 00:53:41.650329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-08 00:53:41.650337 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-08 00:53:41.650344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-08 00:53:41.650352 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.650359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-08 00:53:41.650365 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.650371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-08 00:53:41.650378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-08 00:53:41.650385 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.650391 | orchestrator |
2026-03-08 00:53:41.650398 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-03-08 00:53:41.650404 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:01.012) 0:01:18.822 **********
2026-03-08 00:53:41.650410 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.650416 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.650423 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.650429 | orchestrator |
2026-03-08 00:53:41.650435 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-03-08 00:53:41.650442 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:01.360) 0:01:20.183 **********
2026-03-08 00:53:41.650449 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.650455 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.650461 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.650471 | orchestrator |
2026-03-08 00:53:41.650478 | orchestrator | TASK [include_role : barbican] *************************************************
2026-03-08 00:53:41.650485 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:02.211) 0:01:22.394 **********
2026-03-08 00:53:41.650491 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.650497 | orchestrator |
2026-03-08 00:53:41.650507 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-03-08 00:53:41.650514 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:00.866) 0:01:23.260 **********
2026-03-08 00:53:41.650523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650598 | orchestrator |
2026-03-08 00:53:41.650605 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-08 00:53:41.650612 | orchestrator | Sunday 08 March 2026 00:48:38 +0000 (0:00:05.986) 0:01:29.247 **********
2026-03-08 00:53:41.650692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650730 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.650737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650758 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.650770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 00:53:41.650781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.650798 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.650806 | orchestrator |
2026-03-08 00:53:41.650812 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-08 00:53:41.650819 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:00.937) 0:01:30.184 **********
2026-03-08 00:53:41.650826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650843 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.650849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650864 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.650870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-08 00:53:41.650889 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.650896 | orchestrator |
2026-03-08 00:53:41.650902 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-08 00:53:41.650909 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:01.554) 0:01:31.739 **********
2026-03-08 00:53:41.650916 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.650923 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.650929 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.650936 | orchestrator |
2026-03-08 00:53:41.650942 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-08 00:53:41.650949 | orchestrator | Sunday 08 March 2026 00:48:42 +0000 (0:00:01.541) 0:01:33.281 **********
2026-03-08 00:53:41.650956 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.650963 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.650969 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.650975 | orchestrator |
2026-03-08 00:53:41.650982 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-08 00:53:41.650989 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:02.032) 0:01:35.313 **********
2026-03-08 00:53:41.650995 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.651002 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.651009 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.651015 | orchestrator |
2026-03-08 00:53:41.651022 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-08 00:53:41.651029 | orchestrator | Sunday 08 March 2026 00:48:45 +0000 (0:00:00.430) 0:01:35.744 **********
2026-03-08 00:53:41.651035 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.651042 | orchestrator |
2026-03-08 00:53:41.651048 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-08 00:53:41.651055 | orchestrator | Sunday 08 March 2026 00:48:46 +0000 (0:00:00.916) 0:01:36.661 **********
2026-03-08 00:53:41.651067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651089 | orchestrator |
2026-03-08 00:53:41.651093 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-08 00:53:41.651100 | orchestrator | Sunday 08 March 2026 00:48:49 +0000 (0:00:02.997) 0:01:39.658 **********
2026-03-08 00:53:41.651106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651113 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.651123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651129 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.651139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-08 00:53:41.651145 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.651151 | orchestrator |
2026-03-08 00:53:41.651158 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-08 00:53:41.651164 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:01.965) 0:01:41.623 **********
2026-03-08 00:53:41.651171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:41.651201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:41.651207 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.651215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:41.651221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:41.651227 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.651231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-08 00:53:41.651235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-08 00:53:41.651239 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651243 | orchestrator | 2026-03-08 00:53:41.651247 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-08 00:53:41.651250 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:02.451) 0:01:44.075 ********** 2026-03-08 00:53:41.651254 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651258 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651265 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651269 | orchestrator | 2026-03-08 00:53:41.651273 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-08 00:53:41.651277 | orchestrator | Sunday 08 March 2026 00:48:54 +0000 (0:00:00.960) 0:01:45.036 ********** 2026-03-08 00:53:41.651280 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651284 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651288 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651292 | orchestrator | 2026-03-08 00:53:41.651355 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-08 00:53:41.651360 | orchestrator | Sunday 08 March 2026 00:48:56 +0000 (0:00:01.774) 0:01:46.810 ********** 2026-03-08 00:53:41.651364 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.651368 | orchestrator | 2026-03-08 00:53:41.651372 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-08 00:53:41.651383 | 
orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:00.802) 0:01:47.613 ********** 2026-03-08 00:53:41.651387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.651392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.651426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.651443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651485 | orchestrator | 2026-03-08 00:53:41.651489 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-08 00:53:41.651494 | orchestrator | Sunday 08 March 2026 00:49:02 +0000 (0:00:05.458) 0:01:53.071 ********** 2026-03-08 00:53:41.651500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.651507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651536 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.651553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651570 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.651584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651599 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651603 | orchestrator | 2026-03-08 00:53:41.651607 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-08 00:53:41.651611 | orchestrator | Sunday 08 March 2026 00:49:03 +0000 (0:00:01.072) 0:01:54.144 ********** 2026-03-08 00:53:41.651625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651634 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651648 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651652 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-08 00:53:41.651667 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651671 | orchestrator | 2026-03-08 00:53:41.651684 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-08 00:53:41.651689 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:01.525) 0:01:55.670 ********** 2026-03-08 00:53:41.651692 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.651696 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.651700 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.651704 | orchestrator | 2026-03-08 00:53:41.651708 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-08 00:53:41.651712 | orchestrator | Sunday 08 March 2026 00:49:07 +0000 (0:00:01.784) 0:01:57.455 ********** 2026-03-08 00:53:41.651715 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.651719 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.651723 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.651727 | orchestrator | 2026-03-08 00:53:41.651733 | orchestrator | TASK [include_role : cloudkitty] 
*********************************************** 2026-03-08 00:53:41.651737 | orchestrator | Sunday 08 March 2026 00:49:09 +0000 (0:00:01.975) 0:01:59.430 ********** 2026-03-08 00:53:41.651741 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651745 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651749 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651752 | orchestrator | 2026-03-08 00:53:41.651756 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-08 00:53:41.651782 | orchestrator | Sunday 08 March 2026 00:49:09 +0000 (0:00:00.457) 0:01:59.888 ********** 2026-03-08 00:53:41.651787 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.651791 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.651794 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.651798 | orchestrator | 2026-03-08 00:53:41.651802 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-08 00:53:41.651809 | orchestrator | Sunday 08 March 2026 00:49:09 +0000 (0:00:00.284) 0:02:00.172 ********** 2026-03-08 00:53:41.651813 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.651816 | orchestrator | 2026-03-08 00:53:41.651820 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-08 00:53:41.651824 | orchestrator | Sunday 08 March 2026 00:49:10 +0000 (0:00:00.859) 0:02:01.031 ********** 2026-03-08 00:53:41.651828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 00:53:41.651833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.651840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.651845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 00:53:41.652439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.652451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-03-08 00:53:41.652472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.652479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 
00:53:41.652486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652521 | orchestrator | 2026-03-08 00:53:41.652525 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-08 00:53:41.652529 | orchestrator | Sunday 08 March 2026 00:49:14 +0000 
(0:00:04.330) 0:02:05.361 ********** 2026-03-08 00:53:41.652534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:41.652615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.652630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652678 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.652682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:41.652688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.652693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652718 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.652722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 00:53:41.652729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 00:53:41.652735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.652758 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.652762 | orchestrator | 2026-03-08 00:53:41.652765 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-08 00:53:41.652769 | orchestrator | Sunday 08 March 2026 00:49:16 +0000 (0:00:01.154) 0:02:06.516 ********** 2026-03-08 00:53:41.652773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652782 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.652788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652796 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.652800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-08 00:53:41.652812 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.652816 | orchestrator | 2026-03-08 00:53:41.652820 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-08 00:53:41.652824 | orchestrator | Sunday 08 March 2026 00:49:17 +0000 (0:00:01.326) 0:02:07.842 ********** 2026-03-08 00:53:41.652828 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.652832 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.652835 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.652839 | orchestrator | 2026-03-08 00:53:41.652843 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-08 00:53:41.652847 | orchestrator | Sunday 08 March 2026 00:49:19 +0000 (0:00:02.098) 0:02:09.940 ********** 2026-03-08 00:53:41.652850 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.652854 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.652858 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.652861 | orchestrator | 2026-03-08 00:53:41.652865 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-08 00:53:41.652869 | orchestrator | Sunday 08 March 2026 00:49:21 +0000 (0:00:02.151) 0:02:12.092 ********** 2026-03-08 00:53:41.652873 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.652876 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.652880 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.652884 | orchestrator | 2026-03-08 00:53:41.652888 | orchestrator | TASK [include_role : glance] 
*************************************************** 2026-03-08 00:53:41.652891 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:00.568) 0:02:12.661 ********** 2026-03-08 00:53:41.652895 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.652899 | orchestrator | 2026-03-08 00:53:41.652903 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-08 00:53:41.652906 | orchestrator | Sunday 08 March 2026 00:49:23 +0000 (0:00:00.829) 0:02:13.491 ********** 2026-03-08 00:53:41.652912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 00:53:41.652923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2026-03-08 00:53:41.652931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.652942 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.652950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 00:53:41.652958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.652965 | orchestrator | 2026-03-08 00:53:41.652969 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-08 00:53:41.652984 | orchestrator | Sunday 08 March 2026 00:49:28 +0000 (0:00:05.877) 0:02:19.368 ********** 2026-03-08 00:53:41.652989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:41.652996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.653006 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:41.653018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.653026 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-08 00:53:41.653042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.653047 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653051 | orchestrator | 2026-03-08 00:53:41.653056 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-08 00:53:41.653060 | orchestrator | Sunday 08 March 2026 00:49:32 +0000 (0:00:03.476) 0:02:22.845 ********** 2026-03-08 00:53:41.653065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653080 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653096 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-08 00:53:41.653110 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653114 | orchestrator | 2026-03-08 00:53:41.653118 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-08 00:53:41.653123 | orchestrator | Sunday 08 March 2026 00:49:35 +0000 (0:00:03.454) 0:02:26.300 ********** 2026-03-08 00:53:41.653127 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.653131 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653136 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653140 | orchestrator | 2026-03-08 00:53:41.653144 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-08 00:53:41.653149 | orchestrator | Sunday 08 March 2026 00:49:37 +0000 (0:00:01.418) 0:02:27.719 ********** 2026-03-08 00:53:41.653156 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.653160 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653165 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653169 | orchestrator | 2026-03-08 00:53:41.653174 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2026-03-08 00:53:41.653198 | orchestrator | Sunday 08 March 2026 00:49:39 +0000 (0:00:02.233) 0:02:29.952 ********** 2026-03-08 00:53:41.653203 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653207 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653212 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653216 | orchestrator | 2026-03-08 00:53:41.653220 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-08 00:53:41.653225 | orchestrator | Sunday 08 March 2026 00:49:40 +0000 (0:00:00.569) 0:02:30.522 ********** 2026-03-08 00:53:41.653229 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.653234 | orchestrator | 2026-03-08 00:53:41.653238 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-08 00:53:41.653242 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:00.973) 0:02:31.496 ********** 2026-03-08 00:53:41.653250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:41.653258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:41.653263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 00:53:41.653267 | orchestrator | 2026-03-08 00:53:41.653272 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-08 00:53:41.653276 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:04.096) 0:02:35.592 ********** 2026-03-08 00:53:41.653280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:41.653288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:41.653293 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653297 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 00:53:41.653306 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653311 | orchestrator | 2026-03-08 00:53:41.653318 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-08 
00:53:41.653323 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.686) 0:02:36.278 ********** 2026-03-08 00:53:41.653327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653337 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653395 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-08 00:53:41.653406 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653410 | orchestrator | 2026-03-08 00:53:41.653414 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-08 00:53:41.653417 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:00.635) 
0:02:36.914 ********** 2026-03-08 00:53:41.653421 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.653425 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653428 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653435 | orchestrator | 2026-03-08 00:53:41.653439 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-08 00:53:41.653443 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:01.244) 0:02:38.159 ********** 2026-03-08 00:53:41.653447 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.653451 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653454 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653458 | orchestrator | 2026-03-08 00:53:41.653462 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-08 00:53:41.653466 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:01.928) 0:02:40.087 ********** 2026-03-08 00:53:41.653469 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653473 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653477 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653480 | orchestrator | 2026-03-08 00:53:41.653484 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-08 00:53:41.653488 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:00.595) 0:02:40.682 ********** 2026-03-08 00:53:41.653492 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.653495 | orchestrator | 2026-03-08 00:53:41.653499 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-08 00:53:41.653503 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:01.066) 0:02:41.749 ********** 2026-03-08 00:53:41.653513 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:41.653519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:41.653687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:53:41.653707 | orchestrator | 2026-03-08 00:53:41.653714 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-08 00:53:41.653726 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:03.834) 0:02:45.584 ********** 2026-03-08 00:53:41.653732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:53:41.653739 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:53:41.653765 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:53:41.653778 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653784 | orchestrator | 2026-03-08 00:53:41.653790 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-08 00:53:41.653796 | orchestrator | Sunday 08 March 2026 00:49:56 +0000 (0:00:01.218) 0:02:46.802 ********** 2026-03-08 00:53:41.653807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:41.653864 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:53:41.653868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:41.653887 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-08 00:53:41.653901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-08 00:53:41.653905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-08 00:53:41.653909 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653916 | orchestrator | 2026-03-08 00:53:41.653920 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-08 00:53:41.653924 | orchestrator | Sunday 08 March 2026 00:49:57 +0000 (0:00:00.962) 0:02:47.765 ********** 2026-03-08 00:53:41.653928 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.653932 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653935 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653939 | orchestrator | 2026-03-08 00:53:41.653943 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-08 00:53:41.653946 | orchestrator | Sunday 08 March 2026 00:49:58 +0000 (0:00:01.390) 0:02:49.155 ********** 2026-03-08 00:53:41.653952 | orchestrator | changed: [testbed-node-0] 2026-03-08 
00:53:41.653956 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.653960 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.653964 | orchestrator | 2026-03-08 00:53:41.653967 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-08 00:53:41.653971 | orchestrator | Sunday 08 March 2026 00:50:00 +0000 (0:00:02.110) 0:02:51.266 ********** 2026-03-08 00:53:41.653975 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.653978 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.653982 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.653986 | orchestrator | 2026-03-08 00:53:41.653990 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-08 00:53:41.653993 | orchestrator | Sunday 08 March 2026 00:50:01 +0000 (0:00:00.331) 0:02:51.597 ********** 2026-03-08 00:53:41.653997 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.654001 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.654004 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.654008 | orchestrator | 2026-03-08 00:53:41.654036 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-08 00:53:41.654041 | orchestrator | Sunday 08 March 2026 00:50:01 +0000 (0:00:00.560) 0:02:52.158 ********** 2026-03-08 00:53:41.654045 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.654049 | orchestrator | 2026-03-08 00:53:41.654053 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-08 00:53:41.654056 | orchestrator | Sunday 08 March 2026 00:50:02 +0000 (0:00:00.947) 0:02:53.105 ********** 2026-03-08 00:53:41.654061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:53:41.654065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:41.654073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:41.654083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:53:41.654088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:41.654092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:41.654096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:53:41.654100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:41.654111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:41.654115 | orchestrator | 2026-03-08 00:53:41.654119 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-08 00:53:41.654123 | orchestrator | Sunday 08 March 2026 00:50:06 +0000 (0:00:03.856) 0:02:56.962 ********** 2026-03-08 00:53:41.654129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:53:41.654133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:53:41.654137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:53:41.654141 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.654145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:53:41.654155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:53:41.654162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:53:41.654166 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.654170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-08 00:53:41.654174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-08 00:53:41.654383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-08 00:53:41.654400 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.654406 | orchestrator |
2026-03-08 00:53:41.654412 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-08 00:53:41.654417 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:00.624) 0:02:57.586 **********
2026-03-08 00:53:41.654424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654436 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.654459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654472 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.654477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-08 00:53:41.654494 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.654500 | orchestrator |
2026-03-08 00:53:41.654505 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-08 00:53:41.654511 | orchestrator | Sunday 08 March 2026 00:50:08 +0000 (0:00:00.908) 0:02:58.495 **********
2026-03-08 00:53:41.654517 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.654523 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.654529 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.654535 | orchestrator |
2026-03-08 00:53:41.654541 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-08 00:53:41.654548 | orchestrator | Sunday 08 March 2026 00:50:09 +0000 (0:00:01.446) 0:02:59.941 **********
2026-03-08 00:53:41.654558 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.654564 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.654569 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.654575 | orchestrator |
2026-03-08 00:53:41.654581 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-08 00:53:41.654588 | orchestrator | Sunday 08 March 2026 00:50:11 +0000 (0:00:02.327) 0:03:02.268 **********
2026-03-08 00:53:41.654594 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.654600 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.654606 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.654613 | orchestrator |
2026-03-08 00:53:41.654624 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-08 00:53:41.654628 | orchestrator | Sunday 08 March 2026 00:50:12 +0000 (0:00:00.555) 0:03:02.824 **********
2026-03-08 00:53:41.654632 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.654635 | orchestrator |
2026-03-08 00:53:41.654639 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-08 00:53:41.654643 | orchestrator | Sunday 08 March 2026 00:50:13 +0000 (0:00:01.076) 0:03:03.901 **********
2026-03-08 00:53:41.654647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654685 | orchestrator |
2026-03-08 00:53:41.654689 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-08 00:53:41.654693 | orchestrator | Sunday 08 March 2026 00:50:17 +0000 (0:00:04.451) 0:03:08.352 **********
2026-03-08 00:53:41.654697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654711 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.654715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654726 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.654730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-08 00:53:41.654734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654738 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.654742 | orchestrator |
2026-03-08 00:53:41.654748 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-08 00:53:41.654752 | orchestrator | Sunday 08 March 2026 00:50:18 +0000 (0:00:01.035) 0:03:09.388 **********
2026-03-08 00:53:41.654756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654764 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.654771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654782 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.654786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-08 00:53:41.654793 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.654797 | orchestrator |
2026-03-08 00:53:41.654801 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-08 00:53:41.654805 | orchestrator | Sunday 08 March 2026 00:50:19 +0000 (0:00:00.926) 0:03:10.314 **********
2026-03-08 00:53:41.654809 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.654812 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.654816 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.654820 | orchestrator |
2026-03-08 00:53:41.654824 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-08 00:53:41.654827 | orchestrator | Sunday 08 March 2026 00:50:21 +0000 (0:00:01.558) 0:03:11.873 **********
2026-03-08 00:53:41.654856 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:53:41.654860 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:53:41.654864 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:53:41.654868 | orchestrator |
2026-03-08 00:53:41.654871 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-08 00:53:41.654875 | orchestrator | Sunday 08 March 2026 00:50:23 +0000 (0:00:02.321) 0:03:14.194 **********
2026-03-08 00:53:41.654879 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.654883 | orchestrator |
2026-03-08 00:53:41.654887 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-08 00:53:41.654890 | orchestrator | Sunday 08 March 2026 00:50:25 +0000 (0:00:01.396) 0:03:15.591 **********
2026-03-08 00:53:41.654894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.654899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.654926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.654951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.654963 | orchestrator |
2026-03-08 00:53:41.654967 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-08 00:53:41.654971 | orchestrator | Sunday 08 March 2026 00:50:28 +0000 (0:00:03.692) 0:03:19.284 **********
2026-03-08 00:53:41.655158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.655164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655253 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.655259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.655265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655288 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.655312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-08 00:53:41.655324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.655342 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.655348 | orchestrator |
2026-03-08 00:53:41.655354 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-08 00:53:41.655359 | orchestrator | Sunday 08 March 2026 00:50:29 +0000 (0:00:00.734) 0:03:20.019 **********
2026-03-08 00:53:41.655365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:41.655372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:41.655378 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.655384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-08 00:53:41.655396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-08 00:53:41.655402 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.655407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-08 00:53:41.655422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-08 00:53:41.655428 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.655660 | orchestrator | 2026-03-08 00:53:41.655673 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-08 00:53:41.655678 | orchestrator | Sunday 08 March 2026 00:50:30 +0000 (0:00:01.333) 0:03:21.352 ********** 2026-03-08 00:53:41.655699 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.655704 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.655708 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.655712 | orchestrator | 2026-03-08 00:53:41.655716 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-08 00:53:41.655720 | orchestrator | Sunday 08 March 2026 00:50:32 +0000 (0:00:01.274) 0:03:22.626 ********** 2026-03-08 00:53:41.655724 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.655727 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.655731 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.655735 | orchestrator | 2026-03-08 00:53:41.655739 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-08 00:53:41.655743 | orchestrator | Sunday 08 March 2026 00:50:34 +0000 (0:00:02.033) 0:03:24.660 ********** 2026-03-08 00:53:41.655747 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.655751 | orchestrator | 2026-03-08 00:53:41.655763 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-08 00:53:41.655767 | orchestrator | Sunday 08 March 2026 00:50:35 +0000 (0:00:01.329) 0:03:25.990 ********** 2026-03-08 00:53:41.655771 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-08 00:53:41.655775 | orchestrator | 2026-03-08 00:53:41.655779 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-08 00:53:41.655783 | orchestrator | Sunday 08 March 2026 00:50:38 +0000 (0:00:03.396) 0:03:29.386 ********** 2026-03-08 00:53:41.655788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655854 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.655875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655885 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.655889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655913 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.655917 | orchestrator | 2026-03-08 00:53:41.655921 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-08 00:53:41.655925 | orchestrator | Sunday 08 March 2026 00:50:41 +0000 (0:00:02.574) 0:03:31.961 ********** 2026-03-08 00:53:41.655932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655944 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.655959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655970 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.655974 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:53:41.655981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-08 00:53:41.655985 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.655989 | orchestrator | 2026-03-08 00:53:41.655993 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-08 00:53:41.655996 | orchestrator | Sunday 08 March 2026 00:50:44 +0000 (0:00:02.488) 0:03:34.449 ********** 2026-03-08 00:53:41.656011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656022 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656037 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-08 00:53:41.656080 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656084 | orchestrator | 2026-03-08 00:53:41.656088 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-08 00:53:41.656092 | orchestrator | Sunday 08 March 2026 00:50:46 +0000 (0:00:02.869) 0:03:37.319 ********** 2026-03-08 00:53:41.656095 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.656099 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.656103 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.656106 | orchestrator | 2026-03-08 00:53:41.656110 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-08 00:53:41.656114 | orchestrator | Sunday 08 March 2026 00:50:48 +0000 (0:00:01.817) 0:03:39.137 ********** 2026-03-08 00:53:41.656118 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656121 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656125 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656129 | orchestrator | 2026-03-08 00:53:41.656133 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-08 00:53:41.656136 | orchestrator | Sunday 08 March 2026 00:50:50 +0000 
(0:00:01.408) 0:03:40.545 ********** 2026-03-08 00:53:41.656151 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656156 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656159 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656163 | orchestrator | 2026-03-08 00:53:41.656167 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-08 00:53:41.656170 | orchestrator | Sunday 08 March 2026 00:50:50 +0000 (0:00:00.337) 0:03:40.883 ********** 2026-03-08 00:53:41.656174 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.656196 | orchestrator | 2026-03-08 00:53:41.656202 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-08 00:53:41.656226 | orchestrator | Sunday 08 March 2026 00:50:51 +0000 (0:00:01.383) 0:03:42.267 ********** 2026-03-08 00:53:41.656234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-08 00:53:41.656333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-08 00:53:41.656339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-08 00:53:41.656343 | orchestrator | 2026-03-08 00:53:41.656349 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-08 00:53:41.656355 | orchestrator | Sunday 08 March 2026 00:50:53 +0000 (0:00:01.444) 0:03:43.711 ********** 2026-03-08 00:53:41.656361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:41.656384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:41.656391 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656397 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-08 00:53:41.656420 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656424 | orchestrator | 2026-03-08 00:53:41.656428 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-08 00:53:41.656431 | orchestrator | Sunday 08 March 2026 00:50:53 +0000 (0:00:00.392) 0:03:44.104 ********** 2026-03-08 00:53:41.656435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:41.656440 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:41.656448 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-08 00:53:41.656456 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656459 | orchestrator | 2026-03-08 00:53:41.656463 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-08 00:53:41.656467 | orchestrator | Sunday 08 March 2026 00:50:54 +0000 (0:00:00.882) 0:03:44.986 ********** 2026-03-08 00:53:41.656471 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 00:53:41.656475 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656478 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656482 | orchestrator | 2026-03-08 00:53:41.656486 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-08 00:53:41.656489 | orchestrator | Sunday 08 March 2026 00:50:55 +0000 (0:00:00.436) 0:03:45.423 ********** 2026-03-08 00:53:41.656493 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656497 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656501 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656504 | orchestrator | 2026-03-08 00:53:41.656508 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-08 00:53:41.656512 | orchestrator | Sunday 08 March 2026 00:50:56 +0000 (0:00:01.348) 0:03:46.772 ********** 2026-03-08 00:53:41.656515 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.656519 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.656523 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.656529 | orchestrator | 2026-03-08 00:53:41.656535 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-08 00:53:41.656541 | orchestrator | Sunday 08 March 2026 00:50:56 +0000 (0:00:00.326) 0:03:47.099 ********** 2026-03-08 00:53:41.656546 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.656552 | orchestrator | 2026-03-08 00:53:41.656558 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-08 00:53:41.656569 | orchestrator | Sunday 08 March 2026 00:50:58 +0000 (0:00:01.449) 0:03:48.548 ********** 2026-03-08 00:53:41.656615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 00:53:41.656630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:41.656689 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:41.656730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 00:53:41.656737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 00:53:41.656820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656856 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.656860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:41.656889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:41.656899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-08 00:53:41.656903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-08 00:53:41.656958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.656967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:41.656974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:41.656989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.656997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:41.657052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:41.657056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657070 | orchestrator |
2026-03-08 00:53:41.657075 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-08 00:53:41.657079 | orchestrator | Sunday 08 March 2026 00:51:02 +0000 (0:00:04.274) 0:03:52.823 **********
2026-03-08 00:53:41.657095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 00:53:41.657103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-08 00:53:41.657125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 00:53:41.657171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-08 00:53:41.657450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:41.657469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657482 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.657495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 00:53:41.657506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-08 00:53:41.657578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-08 00:53:41.657597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657623 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.657629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 00:53:41.657659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-08 00:53:41.657688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-08 00:53:41.657695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-08 00:53:41.657700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-08 00:53:41.657704 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.657708 | orchestrator | 2026-03-08 00:53:41.657712 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-08 00:53:41.657716 | orchestrator | Sunday 08 March 2026 00:51:03 +0000 (0:00:01.531) 0:03:54.355 ********** 2026-03-08 00:53:41.657720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657728 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.657743 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657754 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.657759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-08 00:53:41.657769 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.657773 | orchestrator | 2026-03-08 00:53:41.657777 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-08 00:53:41.657781 | orchestrator | Sunday 08 March 2026 00:51:05 +0000 (0:00:02.020) 0:03:56.376 ********** 2026-03-08 00:53:41.657784 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.657788 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.657792 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.657795 | orchestrator | 2026-03-08 00:53:41.657799 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-08 00:53:41.657803 | orchestrator | Sunday 08 March 2026 00:51:07 +0000 (0:00:01.366) 0:03:57.742 ********** 2026-03-08 00:53:41.657807 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.657810 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.657814 | orchestrator | 
changed: [testbed-node-2] 2026-03-08 00:53:41.657818 | orchestrator | 2026-03-08 00:53:41.657821 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-08 00:53:41.657825 | orchestrator | Sunday 08 March 2026 00:51:09 +0000 (0:00:01.974) 0:03:59.717 ********** 2026-03-08 00:53:41.657829 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.657832 | orchestrator | 2026-03-08 00:53:41.657836 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-08 00:53:41.657840 | orchestrator | Sunday 08 March 2026 00:51:10 +0000 (0:00:01.106) 0:04:00.824 ********** 2026-03-08 00:53:41.657844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.657849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.657993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.658004 | orchestrator | 2026-03-08 00:53:41.658008 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-08 00:53:41.658037 | orchestrator | Sunday 08 March 2026 00:51:13 +0000 (0:00:03.341) 0:04:04.165 ********** 2026-03-08 00:53:41.658045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658078 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.658084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658088 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.658092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658096 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.658103 | orchestrator | 2026-03-08 00:53:41.658107 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-08 00:53:41.658111 | orchestrator | Sunday 08 March 2026 00:51:14 +0000 (0:00:00.471) 0:04:04.636 ********** 2026-03-08 00:53:41.658115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658124 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.658142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658147 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658150 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.658154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658165 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.658169 | orchestrator | 2026-03-08 00:53:41.658173 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-08 00:53:41.658195 | orchestrator | Sunday 08 March 2026 00:51:14 +0000 (0:00:00.712) 0:04:05.348 ********** 2026-03-08 00:53:41.658199 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.658203 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.658207 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.658210 | orchestrator | 2026-03-08 00:53:41.658214 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-08 00:53:41.658218 | orchestrator | Sunday 08 March 2026 00:51:16 +0000 (0:00:01.807) 0:04:07.156 ********** 2026-03-08 00:53:41.658222 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.658225 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.658229 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.658233 | orchestrator | 2026-03-08 00:53:41.658237 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-03-08 00:53:41.658240 | orchestrator | Sunday 08 March 2026 00:51:18 +0000 (0:00:01.810) 0:04:08.966 ********** 2026-03-08 00:53:41.658244 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.658248 | orchestrator | 2026-03-08 00:53:41.658252 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-08 00:53:41.658255 | orchestrator | Sunday 08 March 2026 00:51:19 +0000 (0:00:01.323) 0:04:10.289 ********** 2026-03-08 00:53:41.658261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.658269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.658296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.658321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658332 | orchestrator | 2026-03-08 00:53:41.658336 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-08 00:53:41.658340 | orchestrator | Sunday 08 March 2026 00:51:23 +0000 (0:00:04.000) 0:04:14.290 ********** 2026-03-08 00:53:41.658344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658394 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.658423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658436 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.658443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.658448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.658465 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.658469 | orchestrator | 2026-03-08 00:53:41.658643 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-08 00:53:41.658654 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:00.949) 0:04:15.240 ********** 2026-03-08 00:53:41.658661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658695 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.658701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658713 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658737 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.658742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-08 00:53:41.658759 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.658765 | orchestrator | 2026-03-08 00:53:41.658771 | orchestrator | TASK [proxysql-config : 
Copying over nova ProxySQL users config] *************** 2026-03-08 00:53:41.658820 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:00.858) 0:04:16.099 ********** 2026-03-08 00:53:41.658827 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.658830 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.658834 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.658838 | orchestrator | 2026-03-08 00:53:41.658842 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-08 00:53:41.658846 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:01.250) 0:04:17.350 ********** 2026-03-08 00:53:41.658849 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.658853 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.658857 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.658861 | orchestrator | 2026-03-08 00:53:41.658865 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-08 00:53:41.658869 | orchestrator | Sunday 08 March 2026 00:51:29 +0000 (0:00:02.228) 0:04:19.579 ********** 2026-03-08 00:53:41.658872 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.658876 | orchestrator | 2026-03-08 00:53:41.658880 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-08 00:53:41.658897 | orchestrator | Sunday 08 March 2026 00:51:30 +0000 (0:00:01.585) 0:04:21.164 ********** 2026-03-08 00:53:41.658902 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-08 00:53:41.658906 | orchestrator | 2026-03-08 00:53:41.658910 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-08 00:53:41.658913 | orchestrator | Sunday 08 March 2026 
00:51:31 +0000 (0:00:00.916) 0:04:22.081 ********** 2026-03-08 00:53:41.658921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:41.658931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:41.658951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-08 00:53:41.658956 | orchestrator | 2026-03-08 00:53:41.658960 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-08 00:53:41.658964 | orchestrator | Sunday 08 March 2026 00:51:36 +0000 (0:00:04.595) 0:04:26.676 ********** 2026-03-08 
00:53:41.658968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.658972 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.658976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.658980 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.658984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.658988 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.658992 | orchestrator | 2026-03-08 00:53:41.658996 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 
2026-03-08 00:53:41.659000 | orchestrator | Sunday 08 March 2026 00:51:37 +0000 (0:00:01.076) 0:04:27.752 ********** 2026-03-08 00:53:41.659114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659136 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659160 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-08 00:53:41.659196 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:53:41.659200 | orchestrator | 2026-03-08 00:53:41.659204 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:41.659208 | orchestrator | Sunday 08 March 2026 00:51:38 +0000 (0:00:01.525) 0:04:29.278 ********** 2026-03-08 00:53:41.659212 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.659215 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.659219 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.659223 | orchestrator | 2026-03-08 00:53:41.659227 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:41.659230 | orchestrator | Sunday 08 March 2026 00:51:41 +0000 (0:00:02.334) 0:04:31.613 ********** 2026-03-08 00:53:41.659234 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.659238 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.659242 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.659245 | orchestrator | 2026-03-08 00:53:41.659249 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-08 00:53:41.659253 | orchestrator | Sunday 08 March 2026 00:51:43 +0000 (0:00:02.727) 0:04:34.340 ********** 2026-03-08 00:53:41.659257 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-08 00:53:41.659261 | orchestrator | 2026-03-08 00:53:41.659265 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-08 00:53:41.659269 | orchestrator | Sunday 08 March 2026 00:51:45 +0000 (0:00:01.198) 0:04:35.539 ********** 2026-03-08 00:53:41.659273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659278 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659289 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659312 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659316 | orchestrator | 2026-03-08 00:53:41.659320 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-08 00:53:41.659324 | orchestrator | Sunday 08 March 2026 00:51:46 +0000 (0:00:01.065) 
0:04:36.605 ********** 2026-03-08 00:53:41.659331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659335 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659343 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-08 00:53:41.659350 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659354 | orchestrator | 2026-03-08 
00:53:41.659358 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-08 00:53:41.659362 | orchestrator | Sunday 08 March 2026 00:51:47 +0000 (0:00:01.125) 0:04:37.730 ********** 2026-03-08 00:53:41.659366 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659369 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659374 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659377 | orchestrator | 2026-03-08 00:53:41.659381 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:41.659385 | orchestrator | Sunday 08 March 2026 00:51:48 +0000 (0:00:01.520) 0:04:39.251 ********** 2026-03-08 00:53:41.659389 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.659393 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.659397 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.659401 | orchestrator | 2026-03-08 00:53:41.659404 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:41.659412 | orchestrator | Sunday 08 March 2026 00:51:51 +0000 (0:00:02.255) 0:04:41.507 ********** 2026-03-08 00:53:41.659416 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.659420 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.659423 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.659427 | orchestrator | 2026-03-08 00:53:41.659431 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-08 00:53:41.659435 | orchestrator | Sunday 08 March 2026 00:51:53 +0000 (0:00:02.648) 0:04:44.156 ********** 2026-03-08 00:53:41.659439 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-08 00:53:41.659443 | orchestrator | 2026-03-08 00:53:41.659447 | orchestrator | TASK [haproxy-config : 
Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-08 00:53:41.659451 | orchestrator | Sunday 08 March 2026 00:51:54 +0000 (0:00:00.776) 0:04:44.933 ********** 2026-03-08 00:53:41.659455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41.659459 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:53:41.659483 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41.659494 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659498 | orchestrator | 2026-03-08 00:53:41.659502 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-08 00:53:41.659506 | orchestrator | Sunday 08 March 2026 00:51:55 +0000 (0:00:01.262) 0:04:46.195 ********** 2026-03-08 00:53:41.659510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41.659514 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41.659526 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-08 00:53:41.659538 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659544 | orchestrator | 2026-03-08 00:53:41.659550 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-08 00:53:41.659556 | orchestrator | Sunday 08 March 2026 00:51:56 +0000 (0:00:01.177) 0:04:47.373 ********** 2026-03-08 00:53:41.659561 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659567 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659573 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659579 | orchestrator | 2026-03-08 00:53:41.659584 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-08 00:53:41.659590 | orchestrator | Sunday 08 March 2026 00:51:58 +0000 (0:00:01.356) 0:04:48.729 ********** 2026-03-08 00:53:41.659596 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.659602 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.659608 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.659614 | orchestrator | 2026-03-08 00:53:41.659620 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-08 00:53:41.659627 | orchestrator | Sunday 08 March 2026 00:52:00 +0000 (0:00:02.275) 0:04:51.005 ********** 2026-03-08 00:53:41.659631 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.659635 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.659639 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.659643 | orchestrator | 2026-03-08 00:53:41.659646 | 
orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-08 00:53:41.659650 | orchestrator | Sunday 08 March 2026 00:52:03 +0000 (0:00:02.896) 0:04:53.902 ********** 2026-03-08 00:53:41.659654 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.659658 | orchestrator | 2026-03-08 00:53:41.659661 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-08 00:53:41.659665 | orchestrator | Sunday 08 March 2026 00:52:04 +0000 (0:00:01.471) 0:04:55.373 ********** 2026-03-08 00:53:41.659687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.659693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.659701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.659723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659796 | orchestrator | 2026-03-08 00:53:41.659803 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-08 00:53:41.659809 | orchestrator | Sunday 08 March 2026 00:52:08 +0000 (0:00:03.123) 0:04:58.497 ********** 2026-03-08 00:53:41.659816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.659823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659873 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.659880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.659887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659927 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.659936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.659945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 00:53:41.659949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 00:53:41.659958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 00:53:41.659963 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.659967 | orchestrator | 2026-03-08 00:53:41.659971 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-08 
00:53:41.659976 | orchestrator | Sunday 08 March 2026 00:52:08 +0000 (0:00:00.688) 0:04:59.185 ********** 2026-03-08 00:53:41.659980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.659985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.659993 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.660008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.660013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.660017 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.660022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.660026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-08 00:53:41.660031 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.660035 | orchestrator | 2026-03-08 00:53:41.660040 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2026-03-08 00:53:41.660044 | orchestrator | Sunday 08 March 2026 00:52:09 +0000 (0:00:01.225) 0:05:00.410 ********** 2026-03-08 00:53:41.660048 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.660053 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.660057 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.660061 | orchestrator | 2026-03-08 00:53:41.660066 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-08 00:53:41.660070 | orchestrator | Sunday 08 March 2026 00:52:11 +0000 (0:00:01.405) 0:05:01.815 ********** 2026-03-08 00:53:41.660076 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.660083 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.660089 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.660096 | orchestrator | 2026-03-08 00:53:41.660103 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-08 00:53:41.660111 | orchestrator | Sunday 08 March 2026 00:52:13 +0000 (0:00:02.197) 0:05:04.013 ********** 2026-03-08 00:53:41.660117 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.660125 | orchestrator | 2026-03-08 00:53:41.660131 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-08 00:53:41.660138 | orchestrator | Sunday 08 March 2026 00:52:14 +0000 (0:00:01.274) 0:05:05.287 ********** 2026-03-08 00:53:41.660146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:53:41.660264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:53:41.660326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:53:41.660338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:53:41.660344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:53:41.660351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:53:41.660362 | orchestrator | 2026-03-08 00:53:41.660368 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-08 00:53:41.660374 | orchestrator | Sunday 08 March 2026 00:52:20 +0000 (0:00:05.398) 0:05:10.686 ********** 2026-03-08 00:53:41.660396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:53:41.660405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:53:41.660412 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.660418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:53:41.660424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:53:41.660434 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.660453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:53:41.660463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:53:41.660470 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.660475 | orchestrator | 2026-03-08 00:53:41.660481 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
********************
2026-03-08 00:53:41.660487 | orchestrator | Sunday 08 March 2026 00:52:21 +0000 (0:00:00.755) 0:05:11.442 **********
2026-03-08 00:53:41.660491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:41.660496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660505 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.660511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:41.660517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660533 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.660539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-08 00:53:41.660545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-08 00:53:41.660556 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.660562 | orchestrator |
2026-03-08 00:53:41.660568 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-03-08 00:53:41.660574 | orchestrator | Sunday 08 March 2026 00:52:22 +0000 (0:00:01.023) 0:05:12.465 **********
2026-03-08 00:53:41.660579 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.660585 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.660591 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.660597 | orchestrator |
2026-03-08 00:53:41.660604 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-03-08 00:53:41.660610 | orchestrator | Sunday 08 March 2026 00:52:23 +0000 (0:00:01.013) 0:05:13.479 **********
2026-03-08 00:53:41.660616 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.660640 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.660647 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:53:41.660653 | orchestrator |
2026-03-08 00:53:41.660659 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-03-08 00:53:41.660665 | orchestrator | Sunday 08 March 2026 00:52:24 +0000 (0:00:01.366) 0:05:14.845 **********
2026-03-08 00:53:41.660671 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:53:41.660677 | orchestrator |
2026-03-08 00:53:41.660685 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-03-08 00:53:41.660689 | orchestrator | Sunday 08 March 2026 00:52:25 +0000 (0:00:01.534) 0:05:16.380 **********
2026-03-08 00:53:41.660696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.660701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.660709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.660714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.660722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.660762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.660770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.660810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:41.660818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.660856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:41.660870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.660893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:41.660903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660915 | orchestrator |
2026-03-08 00:53:41.660919 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-08 00:53:41.660923 | orchestrator | Sunday 08 March 2026 00:52:30 +0000 (0:00:04.774) 0:05:21.154 **********
2026-03-08 00:53:41.660927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.660931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.660938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.660982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.660987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:41.660990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.660997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.661010 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:53:41.661014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.661018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.661023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.661046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.661060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-08 00:53:41.661068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 00:53:41.661082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 00:53:41.661099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.661104 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:53:41.661114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 00:53:41.661125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 00:53:41.661129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-08 00:53:41.661133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes':
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-08 00:53:41.661137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:41.661146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 00:53:41.661152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 00:53:41.661156 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661160 | orchestrator | 2026-03-08 00:53:41.661165 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-08 00:53:41.661171 | orchestrator | Sunday 08 March 2026 00:52:32 +0000 (0:00:01.444) 0:05:22.599 ********** 2026-03-08 00:53:41.661192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661227 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661233 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661248 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-08 00:53:41.661263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-08 00:53:41.661274 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661278 | orchestrator | 2026-03-08 00:53:41.661281 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-08 00:53:41.661285 | orchestrator | Sunday 08 March 2026 00:52:33 +0000 (0:00:01.014) 0:05:23.613 ********** 2026-03-08 00:53:41.661289 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661293 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661296 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661300 | orchestrator | 2026-03-08 00:53:41.661304 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-08 00:53:41.661307 | orchestrator | Sunday 08 March 2026 00:52:33 +0000 (0:00:00.482) 0:05:24.096 ********** 2026-03-08 00:53:41.661315 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661319 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661323 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661326 | orchestrator | 2026-03-08 00:53:41.661330 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-08 00:53:41.661334 | orchestrator | Sunday 08 March 2026 00:52:35 +0000 (0:00:01.821) 0:05:25.917 ********** 2026-03-08 00:53:41.661337 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.661341 | orchestrator | 2026-03-08 00:53:41.661345 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-08 00:53:41.661348 | orchestrator | Sunday 08 March 2026 00:52:37 +0000 (0:00:01.971) 0:05:27.889 ********** 2026-03-08 00:53:41.661352 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:41.661357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:41.661367 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-08 00:53:41.661371 | orchestrator | 2026-03-08 00:53:41.661375 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-08 00:53:41.661379 | orchestrator | Sunday 08 March 2026 00:52:40 +0000 (0:00:02.823) 0:05:30.713 ********** 2026-03-08 00:53:41.661385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:41.661389 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:41.661397 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-08 00:53:41.661408 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661412 | orchestrator | 2026-03-08 00:53:41.661416 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-08 00:53:41.661420 | orchestrator | Sunday 08 March 2026 00:52:40 +0000 (0:00:00.443) 0:05:31.156 ********** 2026-03-08 00:53:41.661424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 00:53:41.661428 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 00:53:41.661435 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-08 00:53:41.661443 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661447 | orchestrator | 2026-03-08 00:53:41.661450 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-08 00:53:41.661456 | orchestrator | Sunday 08 March 2026 00:52:42 +0000 (0:00:01.272) 0:05:32.428 ********** 2026-03-08 00:53:41.661460 | orchestrator | skipping: [testbed-node-0] 
2026-03-08 00:53:41.661465 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661470 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661476 | orchestrator | 2026-03-08 00:53:41.661481 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-08 00:53:41.661487 | orchestrator | Sunday 08 March 2026 00:52:42 +0000 (0:00:00.431) 0:05:32.860 ********** 2026-03-08 00:53:41.661493 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661498 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661504 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661510 | orchestrator | 2026-03-08 00:53:41.661516 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-08 00:53:41.661521 | orchestrator | Sunday 08 March 2026 00:52:43 +0000 (0:00:01.361) 0:05:34.221 ********** 2026-03-08 00:53:41.661530 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:53:41.661537 | orchestrator | 2026-03-08 00:53:41.661542 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-08 00:53:41.661548 | orchestrator | Sunday 08 March 2026 00:52:45 +0000 (0:00:01.775) 0:05:35.997 ********** 2026-03-08 00:53:41.661554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-08 00:53:41.661608 | orchestrator | 2026-03-08 00:53:41.661614 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-08 00:53:41.661620 | orchestrator | Sunday 08 March 2026 00:52:52 +0000 (0:00:06.474) 0:05:42.471 ********** 2026-03-08 00:53:41.661626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661643 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661672 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-08 00:53:41.661685 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661688 | orchestrator | 2026-03-08 00:53:41.661694 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-08 00:53:41.661698 | orchestrator | Sunday 08 March 2026 00:52:52 +0000 (0:00:00.657) 0:05:43.129 ********** 2026-03-08 00:53:41.661702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661723 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661742 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-08 00:53:41.661761 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661765 | orchestrator | 2026-03-08 00:53:41.661769 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-08 00:53:41.661773 | orchestrator | Sunday 08 March 2026 00:52:54 +0000 (0:00:01.704) 0:05:44.834 ********** 2026-03-08 00:53:41.661776 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.661780 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.661784 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.661787 | orchestrator | 2026-03-08 00:53:41.661791 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-08 00:53:41.661795 | orchestrator | Sunday 08 March 2026 00:52:55 +0000 (0:00:01.484) 0:05:46.318 ********** 2026-03-08 00:53:41.661799 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.661802 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.661806 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.661809 | orchestrator | 
2026-03-08 00:53:41.661813 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-08 00:53:41.661817 | orchestrator | Sunday 08 March 2026 00:52:58 +0000 (0:00:02.220) 0:05:48.538 ********** 2026-03-08 00:53:41.661821 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661824 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661828 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661832 | orchestrator | 2026-03-08 00:53:41.661835 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-08 00:53:41.661839 | orchestrator | Sunday 08 March 2026 00:52:58 +0000 (0:00:00.348) 0:05:48.887 ********** 2026-03-08 00:53:41.661843 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661846 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661850 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661857 | orchestrator | 2026-03-08 00:53:41.661863 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-08 00:53:41.661867 | orchestrator | Sunday 08 March 2026 00:52:58 +0000 (0:00:00.340) 0:05:49.227 ********** 2026-03-08 00:53:41.661871 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661874 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661878 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661882 | orchestrator | 2026-03-08 00:53:41.661885 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-08 00:53:41.661889 | orchestrator | Sunday 08 March 2026 00:52:59 +0000 (0:00:00.683) 0:05:49.911 ********** 2026-03-08 00:53:41.661893 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661897 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661900 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661904 | orchestrator | 
2026-03-08 00:53:41.661908 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-08 00:53:41.661912 | orchestrator | Sunday 08 March 2026 00:52:59 +0000 (0:00:00.329) 0:05:50.240 ********** 2026-03-08 00:53:41.661917 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661921 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661925 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661929 | orchestrator | 2026-03-08 00:53:41.661933 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-08 00:53:41.661936 | orchestrator | Sunday 08 March 2026 00:53:00 +0000 (0:00:00.314) 0:05:50.555 ********** 2026-03-08 00:53:41.661940 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.661944 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.661947 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.661951 | orchestrator | 2026-03-08 00:53:41.661955 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-08 00:53:41.661958 | orchestrator | Sunday 08 March 2026 00:53:01 +0000 (0:00:00.971) 0:05:51.527 ********** 2026-03-08 00:53:41.661962 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.661966 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.661970 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.661973 | orchestrator | 2026-03-08 00:53:41.661977 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-08 00:53:41.661981 | orchestrator | Sunday 08 March 2026 00:53:01 +0000 (0:00:00.752) 0:05:52.279 ********** 2026-03-08 00:53:41.661985 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.661988 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.661992 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.661996 | orchestrator | 2026-03-08 00:53:41.661999 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-08 00:53:41.662003 | orchestrator | Sunday 08 March 2026 00:53:02 +0000 (0:00:00.376) 0:05:52.656 ********** 2026-03-08 00:53:41.662007 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662011 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662040 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662044 | orchestrator | 2026-03-08 00:53:41.662048 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-08 00:53:41.662051 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.966) 0:05:53.622 ********** 2026-03-08 00:53:41.662055 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662059 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662063 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662066 | orchestrator | 2026-03-08 00:53:41.662070 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-08 00:53:41.662074 | orchestrator | Sunday 08 March 2026 00:53:04 +0000 (0:00:01.277) 0:05:54.899 ********** 2026-03-08 00:53:41.662077 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662081 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662085 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662089 | orchestrator | 2026-03-08 00:53:41.662092 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-08 00:53:41.662100 | orchestrator | Sunday 08 March 2026 00:53:05 +0000 (0:00:01.094) 0:05:55.993 ********** 2026-03-08 00:53:41.662104 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.662108 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.662111 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.662115 | orchestrator | 2026-03-08 00:53:41.662119 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for 
backup haproxy to start] ************** 2026-03-08 00:53:41.662122 | orchestrator | Sunday 08 March 2026 00:53:10 +0000 (0:00:04.803) 0:06:00.797 ********** 2026-03-08 00:53:41.662126 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662130 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662134 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662137 | orchestrator | 2026-03-08 00:53:41.662141 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-08 00:53:41.662145 | orchestrator | Sunday 08 March 2026 00:53:13 +0000 (0:00:02.775) 0:06:03.573 ********** 2026-03-08 00:53:41.662149 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.662152 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.662156 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.662160 | orchestrator | 2026-03-08 00:53:41.662164 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-08 00:53:41.662167 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:10.568) 0:06:14.141 ********** 2026-03-08 00:53:41.662171 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662175 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662191 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662195 | orchestrator | 2026-03-08 00:53:41.662199 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-08 00:53:41.662203 | orchestrator | Sunday 08 March 2026 00:53:27 +0000 (0:00:04.089) 0:06:18.230 ********** 2026-03-08 00:53:41.662207 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:53:41.662210 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:53:41.662214 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:53:41.662218 | orchestrator | 2026-03-08 00:53:41.662221 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 
2026-03-08 00:53:41.662225 | orchestrator | Sunday 08 March 2026 00:53:31 +0000 (0:00:04.189) 0:06:22.420 ********** 2026-03-08 00:53:41.662229 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662233 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662236 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662240 | orchestrator | 2026-03-08 00:53:41.662244 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-08 00:53:41.662251 | orchestrator | Sunday 08 March 2026 00:53:32 +0000 (0:00:00.353) 0:06:22.773 ********** 2026-03-08 00:53:41.662255 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662259 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662262 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662266 | orchestrator | 2026-03-08 00:53:41.662270 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-08 00:53:41.662274 | orchestrator | Sunday 08 March 2026 00:53:32 +0000 (0:00:00.335) 0:06:23.108 ********** 2026-03-08 00:53:41.662278 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662281 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662285 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662289 | orchestrator | 2026-03-08 00:53:41.662293 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-08 00:53:41.662296 | orchestrator | Sunday 08 March 2026 00:53:33 +0000 (0:00:00.710) 0:06:23.819 ********** 2026-03-08 00:53:41.662300 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662304 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662313 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662316 | orchestrator | 2026-03-08 00:53:41.662320 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 
2026-03-08 00:53:41.662324 | orchestrator | Sunday 08 March 2026 00:53:33 +0000 (0:00:00.378) 0:06:24.198 ********** 2026-03-08 00:53:41.662331 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662335 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662338 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662342 | orchestrator | 2026-03-08 00:53:41.662346 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-08 00:53:41.662350 | orchestrator | Sunday 08 March 2026 00:53:34 +0000 (0:00:00.379) 0:06:24.577 ********** 2026-03-08 00:53:41.662353 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:53:41.662357 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:53:41.662361 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:53:41.662365 | orchestrator | 2026-03-08 00:53:41.662368 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-08 00:53:41.662372 | orchestrator | Sunday 08 March 2026 00:53:34 +0000 (0:00:00.366) 0:06:24.943 ********** 2026-03-08 00:53:41.662376 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662380 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662383 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662387 | orchestrator | 2026-03-08 00:53:41.662391 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-08 00:53:41.662395 | orchestrator | Sunday 08 March 2026 00:53:39 +0000 (0:00:05.252) 0:06:30.196 ********** 2026-03-08 00:53:41.662398 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:53:41.662402 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:53:41.662406 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:53:41.662410 | orchestrator | 2026-03-08 00:53:41.662413 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:53:41.662417 | 
orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-08 00:53:41.662421 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-08 00:53:41.662425 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-08 00:53:41.662429 | orchestrator |
2026-03-08 00:53:41.662433 | orchestrator |
2026-03-08 00:53:41.662436 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:53:41.662440 | orchestrator | Sunday 08 March 2026 00:53:40 +0000 (0:00:01.012) 0:06:31.209 **********
2026-03-08 00:53:41.662444 | orchestrator | ===============================================================================
2026-03-08 00:53:41.662448 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.57s
2026-03-08 00:53:41.662452 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.47s
2026-03-08 00:53:41.662455 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.99s
2026-03-08 00:53:41.662459 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.88s
2026-03-08 00:53:41.662463 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.46s
2026-03-08 00:53:41.662467 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.40s
2026-03-08 00:53:41.662470 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.25s
2026-03-08 00:53:41.662474 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.97s
2026-03-08 00:53:41.662478 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.80s
2026-03-08 00:53:41.662482 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.77s
2026-03-08 00:53:41.662485 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.60s
2026-03-08 00:53:41.662489 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.45s
2026-03-08 00:53:41.662493 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.33s
2026-03-08 00:53:41.662500 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.27s
2026-03-08 00:53:41.662503 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.19s
2026-03-08 00:53:41.662507 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.10s
2026-03-08 00:53:41.662511 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.09s
2026-03-08 00:53:41.662515 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.07s
2026-03-08 00:53:41.662520 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.00s
2026-03-08 00:53:41.662524 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.95s
2026-03-08 00:53:44.694313 | orchestrator | 2026-03-08 00:53:44 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED
2026-03-08 00:53:44.695099 | orchestrator | 2026-03-08 00:53:44 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED
2026-03-08 00:53:44.696478 | orchestrator | 2026-03-08 00:53:44 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED
2026-03-08 00:53:44.696513 | orchestrator | 2026-03-08 00:53:44 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:53:47.744033 | orchestrator | 2026-03-08 00:53:47 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08
00:53:47.745094 | orchestrator | 2026-03-08 00:53:47 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08
00:55:31.475939 | orchestrator | 2026-03-08 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:34.531982 | orchestrator | 2026-03-08 00:55:34 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:34.533955 | orchestrator | 2026-03-08 00:55:34 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:34.535900 | orchestrator | 2026-03-08 00:55:34 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:34.535965 | orchestrator | 2026-03-08 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:37.589966 | orchestrator | 2026-03-08 00:55:37 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:37.591383 | orchestrator | 2026-03-08 00:55:37 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:37.593299 | orchestrator | 2026-03-08 00:55:37 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:37.593342 | orchestrator | 2026-03-08 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:40.642992 | orchestrator | 2026-03-08 00:55:40 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:40.645177 | orchestrator | 2026-03-08 00:55:40 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:40.646745 | orchestrator | 2026-03-08 00:55:40 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:40.646784 | orchestrator | 2026-03-08 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:43.694344 | orchestrator | 2026-03-08 00:55:43 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:43.697213 | orchestrator | 2026-03-08 00:55:43 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:43.697278 | orchestrator | 2026-03-08 00:55:43 | 
INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:43.697292 | orchestrator | 2026-03-08 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:46.743778 | orchestrator | 2026-03-08 00:55:46 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:46.744644 | orchestrator | 2026-03-08 00:55:46 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:46.746672 | orchestrator | 2026-03-08 00:55:46 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:46.748308 | orchestrator | 2026-03-08 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:49.811331 | orchestrator | 2026-03-08 00:55:49 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:49.812124 | orchestrator | 2026-03-08 00:55:49 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:49.813748 | orchestrator | 2026-03-08 00:55:49 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state STARTED 2026-03-08 00:55:49.813809 | orchestrator | 2026-03-08 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:52.872395 | orchestrator | 2026-03-08 00:55:52 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:52.873107 | orchestrator | 2026-03-08 00:55:52 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:52.883802 | orchestrator | 2026-03-08 00:55:52 | INFO  | Task 409ff042-903c-48ed-95a9-5cf0135e1e2e is in state SUCCESS 2026-03-08 00:55:52.885450 | orchestrator | 2026-03-08 00:55:52.885529 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-08 00:55:52.885539 | orchestrator | 2.16.14 2026-03-08 00:55:52.885547 | orchestrator | 2026-03-08 00:55:52.885554 | orchestrator | PLAY [Prepare deployment of Ceph services] 
************************************* 2026-03-08 00:55:52.885560 | orchestrator | 2026-03-08 00:55:52.885566 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-08 00:55:52.885572 | orchestrator | Sunday 08 March 2026 00:44:46 +0000 (0:00:00.731) 0:00:00.731 ********** 2026-03-08 00:55:52.885580 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.885587 | orchestrator | 2026-03-08 00:55:52.885593 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-08 00:55:52.885599 | orchestrator | Sunday 08 March 2026 00:44:47 +0000 (0:00:00.966) 0:00:01.697 ********** 2026-03-08 00:55:52.885605 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.885612 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.885618 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.885624 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.885630 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.885636 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.885642 | orchestrator | 2026-03-08 00:55:52.885648 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-08 00:55:52.885654 | orchestrator | Sunday 08 March 2026 00:44:49 +0000 (0:00:01.345) 0:00:03.043 ********** 2026-03-08 00:55:52.885660 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.885666 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.885672 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.885678 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.885683 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.885689 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.885695 | orchestrator | 2026-03-08 00:55:52.885701 | orchestrator | TASK [ceph-facts : Check if podman 
binary is present] ************************** 2026-03-08 00:55:52.885706 | orchestrator | Sunday 08 March 2026 00:44:50 +0000 (0:00:00.849) 0:00:03.893 ********** 2026-03-08 00:55:52.885712 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.885718 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.885723 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.885730 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.885736 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.885757 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.885763 | orchestrator | 2026-03-08 00:55:52.885769 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-08 00:55:52.885776 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:01.080) 0:00:04.974 ********** 2026-03-08 00:55:52.885782 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.885788 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.885794 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.885800 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.885806 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.885813 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.885819 | orchestrator | 2026-03-08 00:55:52.885825 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-08 00:55:52.885832 | orchestrator | Sunday 08 March 2026 00:44:51 +0000 (0:00:00.683) 0:00:05.657 ********** 2026-03-08 00:55:52.885928 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.885939 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.885945 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.885951 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.885987 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.885993 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.885997 | orchestrator | 2026-03-08 00:55:52.886001 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-08 00:55:52.886005 | orchestrator | Sunday 08 March 2026 00:44:52 +0000 (0:00:00.543) 0:00:06.200 ********** 2026-03-08 00:55:52.886351 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.886363 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.886368 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.886372 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.886376 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.886381 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.886385 | orchestrator | 2026-03-08 00:55:52.886390 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-08 00:55:52.886395 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.662) 0:00:06.863 ********** 2026-03-08 00:55:52.886400 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886405 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.886409 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.886414 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.886418 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.886423 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.886427 | orchestrator | 2026-03-08 00:55:52.886431 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-08 00:55:52.886435 | orchestrator | Sunday 08 March 2026 00:44:53 +0000 (0:00:00.588) 0:00:07.451 ********** 2026-03-08 00:55:52.886439 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.886443 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.886447 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.886451 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.886454 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.886458 | orchestrator | ok: [testbed-node-2] 2026-03-08 
00:55:52.886462 | orchestrator | 2026-03-08 00:55:52.886466 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-08 00:55:52.886470 | orchestrator | Sunday 08 March 2026 00:44:54 +0000 (0:00:00.788) 0:00:08.239 ********** 2026-03-08 00:55:52.886473 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:55:52.886477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:55:52.886481 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:55:52.886485 | orchestrator | 2026-03-08 00:55:52.886489 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-08 00:55:52.886492 | orchestrator | Sunday 08 March 2026 00:44:55 +0000 (0:00:00.756) 0:00:08.995 ********** 2026-03-08 00:55:52.886496 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.886500 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.886503 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.886520 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.886524 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.886527 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.886531 | orchestrator | 2026-03-08 00:55:52.886535 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-08 00:55:52.886539 | orchestrator | Sunday 08 March 2026 00:44:56 +0000 (0:00:01.484) 0:00:10.480 ********** 2026-03-08 00:55:52.886543 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:55:52.886546 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:55:52.886550 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 
00:55:52.886562 | orchestrator | 2026-03-08 00:55:52.886566 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-08 00:55:52.886570 | orchestrator | Sunday 08 March 2026 00:44:59 +0000 (0:00:02.677) 0:00:13.157 ********** 2026-03-08 00:55:52.886574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-08 00:55:52.886578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:55:52.886581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:55:52.886585 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886589 | orchestrator | 2026-03-08 00:55:52.886593 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-08 00:55:52.886596 | orchestrator | Sunday 08 March 2026 00:45:00 +0000 (0:00:00.911) 0:00:14.069 ********** 2026-03-08 00:55:52.886602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886623 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886626 | orchestrator | 2026-03-08 00:55:52.886630 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 
2026-03-08 00:55:52.886634 | orchestrator | Sunday 08 March 2026 00:45:01 +0000 (0:00:00.869) 0:00:14.939 ********** 2026-03-08 00:55:52.886640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886654 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886658 | orchestrator | 2026-03-08 00:55:52.886662 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-08 00:55:52.886666 | orchestrator | Sunday 08 March 2026 00:45:01 +0000 (0:00:00.374) 0:00:15.314 ********** 2026-03-08 00:55:52.886677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', 
'-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-08 00:44:57.870391', 'end': '2026-03-08 00:44:57.991188', 'delta': '0:00:00.120797', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-08 00:44:58.577474', 'end': '2026-03-08 00:44:58.708948', 'delta': '0:00:00.131474', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-08 00:44:59.165437', 'end': '2026-03-08 00:44:59.267237', 'delta': '0:00:00.101800', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 
'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.886704 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886708 | orchestrator | 2026-03-08 00:55:52.886712 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-08 00:55:52.886716 | orchestrator | Sunday 08 March 2026 00:45:01 +0000 (0:00:00.367) 0:00:15.682 ********** 2026-03-08 00:55:52.886720 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.886723 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.886727 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.886731 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.886735 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.886738 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.886742 | orchestrator | 2026-03-08 00:55:52.886746 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-08 00:55:52.886750 | orchestrator | Sunday 08 March 2026 00:45:03 +0000 (0:00:01.932) 0:00:17.614 ********** 2026-03-08 00:55:52.886754 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.886757 | orchestrator | 2026-03-08 00:55:52.886761 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-08 00:55:52.886765 | orchestrator | Sunday 08 March 2026 00:45:04 +0000 (0:00:00.683) 0:00:18.297 ********** 2026-03-08 00:55:52.886814 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886818 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.886822 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.886826 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.886830 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.886834 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:55:52.886840 | orchestrator | 2026-03-08 00:55:52.886846 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-08 00:55:52.886852 | orchestrator | Sunday 08 March 2026 00:45:06 +0000 (0:00:01.615) 0:00:19.913 ********** 2026-03-08 00:55:52.886943 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.886953 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.886958 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.886964 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.886977 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.886983 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.886989 | orchestrator | 2026-03-08 00:55:52.886995 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:55:52.886999 | orchestrator | Sunday 08 March 2026 00:45:08 +0000 (0:00:02.066) 0:00:21.979 ********** 2026-03-08 00:55:52.887003 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887007 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887011 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887014 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887018 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887022 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887025 | orchestrator | 2026-03-08 00:55:52.887029 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-08 00:55:52.887033 | orchestrator | Sunday 08 March 2026 00:45:10 +0000 (0:00:02.662) 0:00:24.641 ********** 2026-03-08 00:55:52.887599 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887615 | orchestrator | 2026-03-08 00:55:52.887621 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-08 00:55:52.887627 | 
orchestrator | Sunday 08 March 2026 00:45:11 +0000 (0:00:00.167) 0:00:24.809 ********** 2026-03-08 00:55:52.887633 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887639 | orchestrator | 2026-03-08 00:55:52.887644 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:55:52.887649 | orchestrator | Sunday 08 March 2026 00:45:11 +0000 (0:00:00.249) 0:00:25.058 ********** 2026-03-08 00:55:52.887656 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887693 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887699 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887763 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887771 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887775 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887779 | orchestrator | 2026-03-08 00:55:52.887783 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-08 00:55:52.887787 | orchestrator | Sunday 08 March 2026 00:45:12 +0000 (0:00:00.809) 0:00:25.868 ********** 2026-03-08 00:55:52.887791 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887794 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887798 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887802 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887806 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887810 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887813 | orchestrator | 2026-03-08 00:55:52.887817 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-08 00:55:52.887821 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:00.892) 0:00:26.760 ********** 2026-03-08 00:55:52.887825 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887829 | orchestrator | skipping: 
[testbed-node-4] 2026-03-08 00:55:52.887832 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887836 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887840 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887844 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887847 | orchestrator | 2026-03-08 00:55:52.887851 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-08 00:55:52.887855 | orchestrator | Sunday 08 March 2026 00:45:13 +0000 (0:00:00.751) 0:00:27.512 ********** 2026-03-08 00:55:52.887859 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887882 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887888 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887892 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887896 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887899 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887903 | orchestrator | 2026-03-08 00:55:52.887934 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-08 00:55:52.887948 | orchestrator | Sunday 08 March 2026 00:45:14 +0000 (0:00:00.831) 0:00:28.343 ********** 2026-03-08 00:55:52.887952 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887956 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887960 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.887969 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.887973 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.887977 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.887980 | orchestrator | 2026-03-08 00:55:52.887984 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-08 00:55:52.887988 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:00.557) 0:00:28.901 
********** 2026-03-08 00:55:52.887992 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.887995 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.887999 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.888003 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.888007 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.888010 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.888014 | orchestrator | 2026-03-08 00:55:52.888018 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-08 00:55:52.888022 | orchestrator | Sunday 08 March 2026 00:45:15 +0000 (0:00:00.785) 0:00:29.687 ********** 2026-03-08 00:55:52.888026 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.888030 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.888034 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.888037 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.888041 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.888045 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.888049 | orchestrator | 2026-03-08 00:55:52.888052 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-08 00:55:52.888056 | orchestrator | Sunday 08 March 2026 00:45:17 +0000 (0:00:01.216) 0:00:30.904 ********** 2026-03-08 00:55:52.888091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1', 'dm-uuid-LVM-i9Xp5FUImtPtfN54C9ErRcykIZxaciZ8LXUwAGaSEtefK9rOU9kaKk7rZR7ptQZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb', 'dm-uuid-LVM-lHTKlioALzvrdCWxIUOY32laezYa9plhCTJmFyMIYqzt4GULUEK4IqtgTGpoAbH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888205 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.888375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CgeyGY-4o5N-jaLE-Ybsd-Xi8d-yVB4-37QTGL', 'scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea', 'scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.888403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PF0bub-Ex82-boiQ-txFA-GEv1-V0IY-tU6VIs', 'scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f', 'scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.888411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d', 'scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.888419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.888455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3', 'dm-uuid-LVM-A6sX8tBZd3f7ouAe7LbLRKt8yUuKL0IDAxAZcluQUQudt0215DlOFuVxUcuxbYVY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137', 'dm-uuid-LVM-sKJiMMw0cExulSsyIHg8glBLvDfU3ZtqvP3kpDXrQBSsu6FbQiwuhHaTocE12knM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888485 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.888498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888532 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead', 'dm-uuid-LVM-nHFytykV0Xq8u8fjA5hGQa4Cn7XhkTNmvkeLvLgPXHeoyLboVG1ltbWGS54dxNZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717', 'dm-uuid-LVM-52Zq5ucCtcvbGnpmAUTA1jJlUb8YWeRpVDHZgB300qIhha9jZhACuwUx3qWK1rRI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.888678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HrRq55-4gpm-0vnp-o3sj-TvyH-5XAh-qNEgG1', 'scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c', 'scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUddYJ-aAA1-Mgqt-B2eI-RKBS-JglY-blMXBN', 'scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5', 'scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9', 'scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-08 00:55:52.889304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M4Otux-6dey-Ma9v-e8Ja-5EGJ-G046-HaA2BM', 'scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49', 'scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889439 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.889446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7Xg0Dw-SHI6-t4Km-ifTF-zLGd-Zegk-01LUBG', 'scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea', 'scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464', 'scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889573 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.889581 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.889587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-08 00:55:52.889637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889643 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.889650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.889656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part1', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part14', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part15', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part16', 
'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.890375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.890383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890397 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.890404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890457 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:55:52.890474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.890501 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:55:52.890508 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.890512 | orchestrator | 2026-03-08 00:55:52.890516 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-08 00:55:52.890521 | orchestrator | Sunday 08 March 2026 00:45:18 +0000 (0:00:01.319) 0:00:32.223 ********** 2026-03-08 00:55:52.890526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1', 'dm-uuid-LVM-i9Xp5FUImtPtfN54C9ErRcykIZxaciZ8LXUwAGaSEtefK9rOU9kaKk7rZR7ptQZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb', 'dm-uuid-LVM-lHTKlioALzvrdCWxIUOY32laezYa9plhCTJmFyMIYqzt4GULUEK4IqtgTGpoAbH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.890590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3', 'dm-uuid-LVM-A6sX8tBZd3f7ouAe7LbLRKt8yUuKL0IDAxAZcluQUQudt0215DlOFuVxUcuxbYVY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137', 'dm-uuid-LVM-sKJiMMw0cExulSsyIHg8glBLvDfU3ZtqvP3kpDXrQBSsu6FbQiwuhHaTocE12knM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:55:52.891313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CgeyGY-4o5N-jaLE-Ybsd-Xi8d-yVB4-37QTGL', 'scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea', 'scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PF0bub-Ex82-boiQ-txFA-GEv1-V0IY-tU6VIs', 'scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f', 'scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d', 'scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891397 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891404 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:55:52.891612 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead', 'dm-uuid-LVM-nHFytykV0Xq8u8fjA5hGQa4Cn7XhkTNmvkeLvLgPXHeoyLboVG1ltbWGS54dxNZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891626 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HrRq55-4gpm-0vnp-o3sj-TvyH-5XAh-qNEgG1', 'scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c', 'scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891633 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717', 'dm-uuid-LVM-52Zq5ucCtcvbGnpmAUTA1jJlUb8YWeRpVDHZgB300qIhha9jZhACuwUx3qWK1rRI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891639 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.891693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUddYJ-aAA1-Mgqt-B2eI-RKBS-JglY-blMXBN', 'scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5', 'scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9', 'scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891726 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891735 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891966 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891975 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.891979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892079 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.892087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892093 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892154 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d183f1-445d-49e2-ba1a-793a8137c84b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:55:52.892169 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:55:52.892248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M4Otux-6dey-Ma9v-e8Ja-5EGJ-G046-HaA2BM', 'scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49', 'scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7Xg0Dw-SHI6-t4Km-ifTF-zLGd-Zegk-01LUBG', 'scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea', 'scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464', 'scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892316 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892328 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892344 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892349 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892400 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892411 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892419 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part1', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part14', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part15', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part16', 'scsi-SQEMU_QEMU_HARDDISK_c53b58f1-666b-45b2-9be0-abefaf2d6609-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-08 00:55:52.892424 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892456 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.892461 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.892465 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.892476 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892480 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892487 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892491 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892495 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892531 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892541 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f151639-f215-41c0-9f83-a142594f7403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892553 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:55:52.892562 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.892566 | orchestrator | 2026-03-08 00:55:52.892595 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-08 00:55:52.892601 | orchestrator | Sunday 08 March 2026 00:45:20 +0000 (0:00:02.120) 0:00:34.343 ********** 2026-03-08 00:55:52.892605 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.892610 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.892613 | orchestrator | ok: [testbed-node-5] 2026-03-08 
00:55:52.892617 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.892621 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.892624 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.892628 | orchestrator | 2026-03-08 00:55:52.892632 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-08 00:55:52.892636 | orchestrator | Sunday 08 March 2026 00:45:22 +0000 (0:00:01.946) 0:00:36.289 ********** 2026-03-08 00:55:52.892639 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.892643 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.892647 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.892650 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.892654 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.892658 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.892662 | orchestrator | 2026-03-08 00:55:52.892665 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:55:52.892669 | orchestrator | Sunday 08 March 2026 00:45:23 +0000 (0:00:00.837) 0:00:37.127 ********** 2026-03-08 00:55:52.892673 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.892677 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.892680 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.892684 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.892688 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.892692 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.892695 | orchestrator | 2026-03-08 00:55:52.892699 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:55:52.892703 | orchestrator | Sunday 08 March 2026 00:45:24 +0000 (0:00:00.989) 0:00:38.117 ********** 2026-03-08 00:55:52.892707 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.892711 | orchestrator | skipping: [testbed-node-5] 
2026-03-08 00:55:52.892714 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.892718 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.892722 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.892725 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.892729 | orchestrator | 2026-03-08 00:55:52.892733 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:55:52.892739 | orchestrator | Sunday 08 March 2026 00:45:25 +0000 (0:00:00.873) 0:00:38.990 ********** 2026-03-08 00:55:52.892743 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.892747 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.892750 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.892754 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.892758 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.892762 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.892765 | orchestrator | 2026-03-08 00:55:52.892769 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:55:52.892790 | orchestrator | Sunday 08 March 2026 00:45:26 +0000 (0:00:01.331) 0:00:40.322 ********** 2026-03-08 00:55:52.892794 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.892798 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.892802 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.892806 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.892809 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.892813 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.892817 | orchestrator | 2026-03-08 00:55:52.892821 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-08 00:55:52.892829 | orchestrator | Sunday 08 March 2026 00:45:27 +0000 (0:00:00.872) 0:00:41.194 ********** 
2026-03-08 00:55:52.892833 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-08 00:55:52.892837 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-08 00:55:52.892841 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-08 00:55:52.892845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-08 00:55:52.892848 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-08 00:55:52.892852 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-08 00:55:52.892856 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-08 00:55:52.892909 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-08 00:55:52.892915 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-08 00:55:52.892919 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-08 00:55:52.892922 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-08 00:55:52.892926 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-08 00:55:52.892930 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-08 00:55:52.892933 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-08 00:55:52.892937 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-08 00:55:52.892941 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-08 00:55:52.892944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-08 00:55:52.892948 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-08 00:55:52.892952 | orchestrator | 2026-03-08 00:55:52.892956 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-08 00:55:52.892960 | orchestrator | Sunday 08 March 2026 00:45:30 +0000 (0:00:03.174) 0:00:44.368 ********** 2026-03-08 00:55:52.892971 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-03-08 00:55:52.892975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:55:52.892979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:55:52.892983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-08 00:55:52.892986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-08 00:55:52.892990 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-08 00:55:52.893026 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-08 00:55:52.893072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-08 00:55:52.893078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-08 00:55:52.893082 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:55:52.893089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:55:52.893093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:55:52.893097 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-08 00:55:52.893104 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-08 00:55:52.893108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-08 00:55:52.893112 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893116 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-08 00:55:52.893123 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-08 00:55:52.893127 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-03-08 00:55:52.893130 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893134 | orchestrator | 2026-03-08 00:55:52.893138 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-08 00:55:52.893147 | orchestrator | Sunday 08 March 2026 00:45:31 +0000 (0:00:01.044) 0:00:45.412 ********** 2026-03-08 00:55:52.893150 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893154 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893158 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893162 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-03-08 00:55:52.893166 | orchestrator | 2026-03-08 00:55:52.893170 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-08 00:55:52.893175 | orchestrator | Sunday 08 March 2026 00:45:33 +0000 (0:00:01.447) 0:00:46.860 ********** 2026-03-08 00:55:52.893179 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893182 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893189 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893193 | orchestrator | 2026-03-08 00:55:52.893197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-08 00:55:52.893201 | orchestrator | Sunday 08 March 2026 00:45:33 +0000 (0:00:00.327) 0:00:47.188 ********** 2026-03-08 00:55:52.893204 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893208 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893212 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893216 | orchestrator | 2026-03-08 00:55:52.893220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-03-08 00:55:52.893223 | orchestrator | Sunday 08 March 2026 00:45:33 +0000 (0:00:00.438) 0:00:47.626 ********** 2026-03-08 00:55:52.893227 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893231 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893234 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893238 | orchestrator | 2026-03-08 00:55:52.893242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-08 00:55:52.893246 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:00.604) 0:00:48.230 ********** 2026-03-08 00:55:52.893250 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893253 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893257 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893261 | orchestrator | 2026-03-08 00:55:52.893265 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-08 00:55:52.893268 | orchestrator | Sunday 08 March 2026 00:45:34 +0000 (0:00:00.506) 0:00:48.736 ********** 2026-03-08 00:55:52.893272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.893276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.893280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.893283 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893287 | orchestrator | 2026-03-08 00:55:52.893291 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-08 00:55:52.893295 | orchestrator | Sunday 08 March 2026 00:45:35 +0000 (0:00:00.426) 0:00:49.163 ********** 2026-03-08 00:55:52.893298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.893302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.893306 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.893310 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893314 | orchestrator | 2026-03-08 00:55:52.893317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-08 00:55:52.893321 | orchestrator | Sunday 08 March 2026 00:45:36 +0000 (0:00:00.604) 0:00:49.767 ********** 2026-03-08 00:55:52.893325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.893328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.893332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.893339 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893343 | orchestrator | 2026-03-08 00:55:52.893347 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-08 00:55:52.893351 | orchestrator | Sunday 08 March 2026 00:45:36 +0000 (0:00:00.561) 0:00:50.328 ********** 2026-03-08 00:55:52.893354 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893358 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893362 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893366 | orchestrator | 2026-03-08 00:55:52.893369 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-08 00:55:52.893373 | orchestrator | Sunday 08 March 2026 00:45:37 +0000 (0:00:00.888) 0:00:51.217 ********** 2026-03-08 00:55:52.893377 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:55:52.893381 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-08 00:55:52.893397 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-08 00:55:52.893402 | orchestrator | 2026-03-08 00:55:52.893406 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-08 00:55:52.893409 | orchestrator | Sunday 08 March 2026 
00:45:39 +0000 (0:00:02.269) 0:00:53.488 ********** 2026-03-08 00:55:52.893413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:55:52.893417 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:55:52.893421 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:55:52.893425 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:55:52.893437 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:55:52.893441 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:55:52.893445 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:55:52.893448 | orchestrator | 2026-03-08 00:55:52.893452 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-08 00:55:52.893456 | orchestrator | Sunday 08 March 2026 00:45:40 +0000 (0:00:00.892) 0:00:54.380 ********** 2026-03-08 00:55:52.893460 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:55:52.893463 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:55:52.893467 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:55:52.893471 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:55:52.893475 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:55:52.893478 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:55:52.893485 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-08 00:55:52.893489 | orchestrator | 2026-03-08 00:55:52.893493 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:55:52.893497 | orchestrator | Sunday 08 March 2026 00:45:42 +0000 (0:00:01.942) 0:00:56.323 ********** 2026-03-08 00:55:52.893501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.893506 | orchestrator | 2026-03-08 00:55:52.893510 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:55:52.893513 | orchestrator | Sunday 08 March 2026 00:45:43 +0000 (0:00:01.066) 0:00:57.389 ********** 2026-03-08 00:55:52.893517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.893521 | orchestrator | 2026-03-08 00:55:52.893525 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:55:52.893532 | orchestrator | Sunday 08 March 2026 00:45:44 +0000 (0:00:01.290) 0:00:58.680 ********** 2026-03-08 00:55:52.893536 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893540 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893543 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893547 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.893551 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.893555 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.893558 | orchestrator | 2026-03-08 00:55:52.893562 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:55:52.893566 | orchestrator | Sunday 08 March 2026 00:45:46 +0000 (0:00:01.298) 0:00:59.978 ********** 2026-03-08 
00:55:52.893570 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893573 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893577 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893581 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893585 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893588 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893592 | orchestrator | 2026-03-08 00:55:52.893596 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:55:52.893599 | orchestrator | Sunday 08 March 2026 00:45:47 +0000 (0:00:01.027) 0:01:01.006 ********** 2026-03-08 00:55:52.893603 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893607 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893611 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893614 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893618 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893622 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893626 | orchestrator | 2026-03-08 00:55:52.893629 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:55:52.893633 | orchestrator | Sunday 08 March 2026 00:45:48 +0000 (0:00:01.636) 0:01:02.643 ********** 2026-03-08 00:55:52.893637 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893641 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893644 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893648 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893652 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893655 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893659 | orchestrator | 2026-03-08 00:55:52.893663 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:55:52.893667 | orchestrator | 
Sunday 08 March 2026 00:45:49 +0000 (0:00:00.866) 0:01:03.509 ********** 2026-03-08 00:55:52.893671 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893674 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893678 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893682 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.893685 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.893703 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.893707 | orchestrator | 2026-03-08 00:55:52.893712 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:55:52.893716 | orchestrator | Sunday 08 March 2026 00:45:51 +0000 (0:00:01.248) 0:01:04.758 ********** 2026-03-08 00:55:52.893721 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893725 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893729 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893733 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893738 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893742 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893746 | orchestrator | 2026-03-08 00:55:52.893751 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:55:52.893755 | orchestrator | Sunday 08 March 2026 00:45:51 +0000 (0:00:00.826) 0:01:05.584 ********** 2026-03-08 00:55:52.893759 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893764 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893771 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893775 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893779 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893784 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893788 | orchestrator | 2026-03-08 00:55:52.893792 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:55:52.893797 | orchestrator | Sunday 08 March 2026 00:45:52 +0000 (0:00:00.958) 0:01:06.543 ********** 2026-03-08 00:55:52.893801 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893805 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893810 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893814 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.893818 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.893822 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.893827 | orchestrator | 2026-03-08 00:55:52.893831 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:55:52.893835 | orchestrator | Sunday 08 March 2026 00:45:53 +0000 (0:00:00.959) 0:01:07.502 ********** 2026-03-08 00:55:52.893840 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.893844 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.893849 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.893853 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.893857 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.893879 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.893885 | orchestrator | 2026-03-08 00:55:52.893895 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:55:52.893901 | orchestrator | Sunday 08 March 2026 00:45:54 +0000 (0:00:01.113) 0:01:08.616 ********** 2026-03-08 00:55:52.893907 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893913 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893919 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893926 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.893932 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.893938 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.893944 | 
orchestrator | 2026-03-08 00:55:52.893954 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:55:52.893962 | orchestrator | Sunday 08 March 2026 00:45:55 +0000 (0:00:00.604) 0:01:09.221 ********** 2026-03-08 00:55:52.893968 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.893973 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.893979 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.893985 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.893991 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.893997 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.894003 | orchestrator | 2026-03-08 00:55:52.894009 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:55:52.894044 | orchestrator | Sunday 08 March 2026 00:45:56 +0000 (0:00:00.903) 0:01:10.124 ********** 2026-03-08 00:55:52.894050 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894056 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894062 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894068 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894074 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894080 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894085 | orchestrator | 2026-03-08 00:55:52.894091 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:55:52.894097 | orchestrator | Sunday 08 March 2026 00:45:57 +0000 (0:00:00.654) 0:01:10.779 ********** 2026-03-08 00:55:52.894103 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894110 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894116 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894122 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894128 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
00:55:52.894134 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894140 | orchestrator | 2026-03-08 00:55:52.894163 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:55:52.894169 | orchestrator | Sunday 08 March 2026 00:45:58 +0000 (0:00:01.021) 0:01:11.800 ********** 2026-03-08 00:55:52.894173 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894177 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894181 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894184 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894188 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894192 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894195 | orchestrator | 2026-03-08 00:55:52.894199 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:55:52.894203 | orchestrator | Sunday 08 March 2026 00:45:58 +0000 (0:00:00.898) 0:01:12.698 ********** 2026-03-08 00:55:52.894207 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894211 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894215 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894218 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894222 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894226 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894229 | orchestrator | 2026-03-08 00:55:52.894234 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:55:52.894240 | orchestrator | Sunday 08 March 2026 00:45:59 +0000 (0:00:00.988) 0:01:13.687 ********** 2026-03-08 00:55:52.894247 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894252 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894258 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894263 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894302 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894309 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894315 | orchestrator | 2026-03-08 00:55:52.894323 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:55:52.894327 | orchestrator | Sunday 08 March 2026 00:46:01 +0000 (0:00:01.274) 0:01:14.961 ********** 2026-03-08 00:55:52.894330 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894334 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894338 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894342 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.894345 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.894349 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.894353 | orchestrator | 2026-03-08 00:55:52.894357 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:55:52.894360 | orchestrator | Sunday 08 March 2026 00:46:02 +0000 (0:00:00.938) 0:01:15.901 ********** 2026-03-08 00:55:52.894364 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894368 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894372 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894375 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.894379 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.894383 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.894386 | orchestrator | 2026-03-08 00:55:52.894390 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:55:52.894394 | orchestrator | Sunday 08 March 2026 00:46:03 +0000 (0:00:00.892) 0:01:16.794 ********** 2026-03-08 00:55:52.894398 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894401 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894405 | 
orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894409 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.894412 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.894416 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.894420 | orchestrator | 2026-03-08 00:55:52.894424 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-08 00:55:52.894427 | orchestrator | Sunday 08 March 2026 00:46:04 +0000 (0:00:01.590) 0:01:18.384 ********** 2026-03-08 00:55:52.894431 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.894442 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.894446 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.894450 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.894454 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.894457 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.894461 | orchestrator | 2026-03-08 00:55:52.894469 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-08 00:55:52.894473 | orchestrator | Sunday 08 March 2026 00:46:07 +0000 (0:00:02.580) 0:01:20.965 ********** 2026-03-08 00:55:52.894477 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.894481 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.894484 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.894488 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.894492 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.894496 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.894499 | orchestrator | 2026-03-08 00:55:52.894503 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-08 00:55:52.894507 | orchestrator | Sunday 08 March 2026 00:46:10 +0000 (0:00:02.936) 0:01:23.901 ********** 2026-03-08 00:55:52.894510 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.894515 | orchestrator | 2026-03-08 00:55:52.894519 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-08 00:55:52.894523 | orchestrator | Sunday 08 March 2026 00:46:11 +0000 (0:00:01.816) 0:01:25.718 ********** 2026-03-08 00:55:52.894526 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894530 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894534 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894538 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894542 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894545 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894549 | orchestrator | 2026-03-08 00:55:52.894553 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-08 00:55:52.894557 | orchestrator | Sunday 08 March 2026 00:46:12 +0000 (0:00:00.953) 0:01:26.671 ********** 2026-03-08 00:55:52.894560 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894564 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894568 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894571 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894575 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894579 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894583 | orchestrator | 2026-03-08 00:55:52.894587 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-08 00:55:52.894590 | orchestrator | Sunday 08 March 2026 00:46:13 +0000 (0:00:00.939) 0:01:27.611 ********** 2026-03-08 00:55:52.894594 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 
00:55:52.894598 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:55:52.894602 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:55:52.894605 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:55:52.894609 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:55:52.894613 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894617 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-08 00:55:52.894621 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894625 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894628 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894650 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894654 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-08 00:55:52.894658 | orchestrator | 2026-03-08 00:55:52.894662 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-08 00:55:52.894666 | orchestrator | Sunday 08 March 2026 00:46:15 +0000 (0:00:01.771) 0:01:29.382 ********** 2026-03-08 00:55:52.894669 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.894673 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.894677 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.894682 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.894688 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.894694 | 
orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.894700 | orchestrator | 2026-03-08 00:55:52.894710 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-08 00:55:52.894718 | orchestrator | Sunday 08 March 2026 00:46:17 +0000 (0:00:01.536) 0:01:30.919 ********** 2026-03-08 00:55:52.894723 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894729 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894734 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894740 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894745 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894750 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894756 | orchestrator | 2026-03-08 00:55:52.894762 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-08 00:55:52.894768 | orchestrator | Sunday 08 March 2026 00:46:17 +0000 (0:00:00.603) 0:01:31.522 ********** 2026-03-08 00:55:52.894773 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894779 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894785 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894790 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894796 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894801 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894807 | orchestrator | 2026-03-08 00:55:52.894813 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-08 00:55:52.894819 | orchestrator | Sunday 08 March 2026 00:46:18 +0000 (0:00:00.798) 0:01:32.321 ********** 2026-03-08 00:55:52.894824 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894834 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.894840 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.894847 | 
orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.894852 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.894859 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.894882 | orchestrator | 2026-03-08 00:55:52.894888 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-08 00:55:52.894894 | orchestrator | Sunday 08 March 2026 00:46:19 +0000 (0:00:00.615) 0:01:32.936 ********** 2026-03-08 00:55:52.894901 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.894907 | orchestrator | 2026-03-08 00:55:52.894913 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-08 00:55:52.894919 | orchestrator | Sunday 08 March 2026 00:46:20 +0000 (0:00:01.265) 0:01:34.202 ********** 2026-03-08 00:55:52.894924 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.894931 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.894936 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.894942 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.894948 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.894954 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.894960 | orchestrator | 2026-03-08 00:55:52.894966 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-08 00:55:52.894974 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:48.897) 0:02:23.100 ********** 2026-03-08 00:55:52.894978 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.894982 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.894986 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.894989 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.894993 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.894997 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.895000 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.895004 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895008 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.895012 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.895016 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.895019 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895023 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.895027 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.895031 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.895035 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895038 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.895043 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.895046 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.895050 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895075 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-08 00:55:52.895080 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-08 00:55:52.895085 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-08 00:55:52.895091 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895099 | orchestrator | 2026-03-08 00:55:52.895108 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-08 00:55:52.895113 | orchestrator | Sunday 08 March 2026 00:47:09 +0000 (0:00:00.520) 0:02:23.620 ********** 2026-03-08 00:55:52.895119 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895125 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895132 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895138 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895143 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895149 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895156 | orchestrator | 2026-03-08 00:55:52.895162 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-08 00:55:52.895170 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.788) 0:02:24.409 ********** 2026-03-08 00:55:52.895178 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895184 | orchestrator | 2026-03-08 00:55:52.895190 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-08 00:55:52.895196 | orchestrator | Sunday 08 March 2026 00:47:10 +0000 (0:00:00.107) 0:02:24.517 ********** 2026-03-08 00:55:52.895202 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895207 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895213 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895219 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895224 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895235 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:55:52.895240 | orchestrator | 2026-03-08 00:55:52.895246 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-08 00:55:52.895253 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.560) 0:02:25.078 ********** 2026-03-08 00:55:52.895258 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895265 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895271 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895277 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895288 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895295 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895300 | orchestrator | 2026-03-08 00:55:52.895304 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-08 00:55:52.895308 | orchestrator | Sunday 08 March 2026 00:47:11 +0000 (0:00:00.656) 0:02:25.735 ********** 2026-03-08 00:55:52.895311 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895315 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895319 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895323 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895326 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895330 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895334 | orchestrator | 2026-03-08 00:55:52.895339 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-08 00:55:52.895344 | orchestrator | Sunday 08 March 2026 00:47:12 +0000 (0:00:00.572) 0:02:26.307 ********** 2026-03-08 00:55:52.895350 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.895356 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.895361 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.895367 | orchestrator | ok: [testbed-node-0] 2026-03-08 
00:55:52.895373 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.895379 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.895385 | orchestrator | 2026-03-08 00:55:52.895390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-08 00:55:52.895396 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:02.531) 0:02:28.839 ********** 2026-03-08 00:55:52.895401 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.895406 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.895412 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.895417 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.895422 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.895428 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.895433 | orchestrator | 2026-03-08 00:55:52.895439 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-08 00:55:52.895445 | orchestrator | Sunday 08 March 2026 00:47:15 +0000 (0:00:00.617) 0:02:29.457 ********** 2026-03-08 00:55:52.895451 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.895458 | orchestrator | 2026-03-08 00:55:52.895463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-08 00:55:52.895469 | orchestrator | Sunday 08 March 2026 00:47:16 +0000 (0:00:01.121) 0:02:30.578 ********** 2026-03-08 00:55:52.895475 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895480 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895486 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895492 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895497 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895503 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 00:55:52.895510 | orchestrator | 2026-03-08 00:55:52.895516 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-08 00:55:52.895522 | orchestrator | Sunday 08 March 2026 00:47:17 +0000 (0:00:00.707) 0:02:31.285 ********** 2026-03-08 00:55:52.895529 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895535 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895548 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895554 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895560 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895567 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895571 | orchestrator | 2026-03-08 00:55:52.895575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-08 00:55:52.895578 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:00.595) 0:02:31.881 ********** 2026-03-08 00:55:52.895582 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895586 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895630 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895635 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.895639 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.895642 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.895646 | orchestrator | 2026-03-08 00:55:52.895650 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-08 00:55:52.895654 | orchestrator | Sunday 08 March 2026 00:47:18 +0000 (0:00:00.721) 0:02:32.603 ********** 2026-03-08 00:55:52.895657 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.895661 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.895665 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.895669 | orchestrator | skipping: 
[testbed-node-0]
2026-03-08 00:55:52.895673 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.895676 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.895680 | orchestrator |
2026-03-08 00:55:52.895684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-08 00:55:52.895688 | orchestrator | Sunday 08 March 2026 00:47:19 +0000 (0:00:00.760) 0:02:33.363 **********
2026-03-08 00:55:52.895692 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.895696 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.895699 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.895703 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.895707 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.895711 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.895714 | orchestrator |
2026-03-08 00:55:52.895718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-08 00:55:52.895722 | orchestrator | Sunday 08 March 2026 00:47:20 +0000 (0:00:00.848) 0:02:34.211 **********
2026-03-08 00:55:52.895726 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.895730 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.895734 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.895737 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.895741 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.895745 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.895749 | orchestrator |
2026-03-08 00:55:52.895752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-08 00:55:52.895756 | orchestrator | Sunday 08 March 2026 00:47:21 +0000 (0:00:00.788) 0:02:35.000 **********
2026-03-08 00:55:52.895760 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.895764 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.895772 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.895776 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.895779 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.895783 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.895787 | orchestrator |
2026-03-08 00:55:52.895791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-08 00:55:52.895794 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.983) 0:02:35.983 **********
2026-03-08 00:55:52.895798 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.895802 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.895807 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.895813 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.895819 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.895835 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.895842 | orchestrator |
2026-03-08 00:55:52.895848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-08 00:55:52.895854 | orchestrator | Sunday 08 March 2026 00:47:22 +0000 (0:00:00.715) 0:02:36.698 **********
2026-03-08 00:55:52.895900 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.895908 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.895913 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.895919 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.895925 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.895930 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.895936 | orchestrator |
2026-03-08 00:55:52.895941 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-08 00:55:52.895947 | orchestrator | Sunday 08 March 2026 00:47:24 +0000 (0:00:01.333) 0:02:38.032 **********
2026-03-08 00:55:52.895953 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.895959 | orchestrator |
2026-03-08 00:55:52.895965 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-08 00:55:52.895971 | orchestrator | Sunday 08 March 2026 00:47:25 +0000 (0:00:01.132) 0:02:39.164 **********
2026-03-08 00:55:52.895977 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-08 00:55:52.895983 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-08 00:55:52.895989 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-08 00:55:52.895995 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896001 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-08 00:55:52.896007 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-08 00:55:52.896013 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896019 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896025 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896031 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-08 00:55:52.896037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896054 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-08 00:55:52.896067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896078 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896115 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-08 00:55:52.896128 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896133 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896139 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896145 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896150 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-08 00:55:52.896163 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896169 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896175 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896208 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-08 00:55:52.896215 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896223 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-08 00:55:52.896251 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896263 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896286 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-08 00:55:52.896292 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896298 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896309 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896315 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896321 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-08 00:55:52.896328 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896339 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896345 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896351 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896357 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-08 00:55:52.896369 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896375 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896387 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896393 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-08 00:55:52.896399 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896404 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896410 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896416 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896422 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896428 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896434 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-08 00:55:52.896439 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896841 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896848 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-08 00:55:52.896880 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.896886 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896892 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896961 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896965 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-08 00:55:52.896969 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-08 00:55:52.896974 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.896980 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.896986 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.896992 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.896998 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-08 00:55:52.897003 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-08 00:55:52.897010 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-08 00:55:52.897016 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-08 00:55:52.897022 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-08 00:55:52.897028 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-08 00:55:52.897034 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-08 00:55:52.897040 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-08 00:55:52.897046 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-08 00:55:52.897051 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-08 00:55:52.897058 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-08 00:55:52.897066 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-08 00:55:52.897075 | orchestrator |
2026-03-08 00:55:52.897086 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-08 00:55:52.897099 | orchestrator | Sunday 08 March 2026 00:47:32 +0000 (0:00:07.004) 0:02:46.168 **********
2026-03-08 00:55:52.897110 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897121 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897247 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897257 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.897264 | orchestrator |
2026-03-08 00:55:52.897271 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-08 00:55:52.897278 | orchestrator | Sunday 08 March 2026 00:47:33 +0000 (0:00:01.022) 0:02:47.190 **********
2026-03-08 00:55:52.897284 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897290 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897296 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897302 | orchestrator |
2026-03-08 00:55:52.897308 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-08 00:55:52.897325 | orchestrator | Sunday 08 March 2026 00:47:34 +0000 (0:00:01.190) 0:02:48.381 **********
2026-03-08 00:55:52.897331 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897337 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897343 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.897349 | orchestrator |
2026-03-08 00:55:52.897356 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-08 00:55:52.897361 | orchestrator | Sunday 08 March 2026 00:47:35 +0000 (0:00:01.281) 0:02:49.662 **********
2026-03-08 00:55:52.897367 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.897373 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.897378 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.897384 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897392 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897398 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897404 | orchestrator |
2026-03-08 00:55:52.897409 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-08 00:55:52.897415 | orchestrator | Sunday 08 March 2026 00:47:36 +0000 (0:00:00.523) 0:02:50.186 **********
2026-03-08 00:55:52.897421 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.897427 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.897433 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.897439 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897444 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897450 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897457 | orchestrator |
2026-03-08 00:55:52.897463 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-08 00:55:52.897468 | orchestrator | Sunday 08 March 2026 00:47:37 +0000 (0:00:00.899) 0:02:51.086 **********
2026-03-08 00:55:52.897474 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897481 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897486 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897493 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897499 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897505 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897513 | orchestrator |
2026-03-08 00:55:52.897563 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-08 00:55:52.897572 | orchestrator | Sunday 08 March 2026 00:47:37 +0000 (0:00:00.588) 0:02:51.675 **********
2026-03-08 00:55:52.897579 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897584 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897589 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897595 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897601 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897606 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897612 | orchestrator |
2026-03-08 00:55:52.897618 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-08 00:55:52.897625 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.567) 0:02:52.242 **********
2026-03-08 00:55:52.897631 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897638 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897645 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897651 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897657 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897803 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897815 | orchestrator |
2026-03-08 00:55:52.897821 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-08 00:55:52.897828 | orchestrator | Sunday 08 March 2026 00:47:38 +0000 (0:00:00.460) 0:02:52.703 **********
2026-03-08 00:55:52.897848 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897854 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897875 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897881 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897887 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897893 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897898 | orchestrator |
2026-03-08 00:55:52.897904 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-08 00:55:52.897910 | orchestrator | Sunday 08 March 2026 00:47:39 +0000 (0:00:00.790) 0:02:53.493 **********
2026-03-08 00:55:52.897917 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897922 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897928 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897933 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.897939 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.897945 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897951 | orchestrator |
2026-03-08 00:55:52.897958 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-08 00:55:52.897964 | orchestrator | Sunday 08 March 2026 00:47:40 +0000 (0:00:00.848) 0:02:54.342 **********
2026-03-08 00:55:52.897970 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.897976 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.897982 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.897988 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.897995 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898002 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898008 | orchestrator |
2026-03-08 00:55:52.898051 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-08 00:55:52.898057 | orchestrator | Sunday 08 March 2026 00:47:41 +0000 (0:00:00.728) 0:02:55.070 **********
2026-03-08 00:55:52.898063 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898070 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898076 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898082 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.898134 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.898141 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.898147 | orchestrator |
2026-03-08 00:55:52.898153 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-08 00:55:52.898160 | orchestrator | Sunday 08 March 2026 00:47:45 +0000 (0:00:03.928) 0:02:58.999 **********
2026-03-08 00:55:52.898166 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.898173 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.898197 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.898203 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898209 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898215 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898220 | orchestrator |
2026-03-08 00:55:52.898226 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-08 00:55:52.898233 | orchestrator | Sunday 08 March 2026 00:47:46 +0000 (0:00:01.129) 0:03:00.128 **********
2026-03-08 00:55:52.898239 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.898245 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.898327 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.898352 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898363 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898374 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898383 | orchestrator |
2026-03-08 00:55:52.898393 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-08 00:55:52.898404 | orchestrator | Sunday 08 March 2026 00:47:47 +0000 (0:00:00.947) 0:03:01.075 **********
2026-03-08 00:55:52.898410 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.898416 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.898423 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.898439 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898446 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898452 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898458 | orchestrator |
2026-03-08 00:55:52.898465 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-08 00:55:52.898470 | orchestrator | Sunday 08 March 2026 00:47:48 +0000 (0:00:01.056) 0:03:02.132 **********
2026-03-08 00:55:52.898477 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.898484 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.898490 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-08 00:55:52.898497 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898557 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898567 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898637 | orchestrator |
2026-03-08 00:55:52.898645 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-08 00:55:52.898651 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:00.638) 0:03:02.770 **********
2026-03-08 00:55:52.898659 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-08 00:55:52.898669 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-08 00:55:52.898677 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.898683 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-08 00:55:52.898691 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-08 00:55:52.898705 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.898713 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-08 00:55:52.898720 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-08 00:55:52.898727 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898734 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.898741 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898748 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898755 | orchestrator |
2026-03-08 00:55:52.898761 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-08 00:55:52.898767 | orchestrator | Sunday 08 March 2026 00:47:49 +0000 (0:00:00.907) 0:03:03.677 **********
2026-03-08 00:55:52.898781 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.898787 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.898793 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.898799 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898805 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898811 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898817 | orchestrator |
2026-03-08 00:55:52.898919 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-08 00:55:52.898933 | orchestrator | Sunday 08 March 2026 00:47:50 +0000 (0:00:00.749) 0:03:04.426 **********
2026-03-08 00:55:52.898940 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.898947 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.898953 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.898960 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.898967 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.898974 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.898981 | orchestrator |
2026-03-08 00:55:52.898988 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-08 00:55:52.898994 | orchestrator | Sunday 08 March 2026 00:47:51 +0000 (0:00:00.776) 0:03:05.203 **********
2026-03-08 00:55:52.899000 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899005 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.899011 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.899016 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899022 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899028 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899034 | orchestrator |
2026-03-08 00:55:52.899039 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-08 00:55:52.899046 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:00.570) 0:03:05.774 **********
2026-03-08 00:55:52.899053 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899059 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.899065 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.899072 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899079 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899085 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899093 | orchestrator |
2026-03-08 00:55:52.899100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-08 00:55:52.899155 | orchestrator | Sunday 08 March 2026 00:47:52 +0000 (0:00:00.712) 0:03:06.486 **********
2026-03-08 00:55:52.899163 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899171 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.899177 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.899184 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899190 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899197 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899204 | orchestrator |
2026-03-08 00:55:52.899211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-08 00:55:52.899218 | orchestrator | Sunday 08 March 2026 00:47:53 +0000 (0:00:00.786) 0:03:07.272 **********
2026-03-08 00:55:52.899224 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.899232 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.899239 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.899245 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899253 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899260 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899267 | orchestrator |
2026-03-08 00:55:52.899273 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-08 00:55:52.899280 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:01.083) 0:03:08.356 **********
2026-03-08 00:55:52.899288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:55:52.899305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:55:52.899312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:55:52.899319 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899325 | orchestrator |
2026-03-08 00:55:52.899332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-08 00:55:52.899339 | orchestrator | Sunday 08 March 2026 00:47:54 +0000 (0:00:00.365) 0:03:08.721 **********
2026-03-08 00:55:52.899346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:55:52.899353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:55:52.899360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:55:52.899367 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899374 | orchestrator |
2026-03-08 00:55:52.899381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-08 00:55:52.899395 | orchestrator | Sunday 08 March 2026 00:47:55 +0000 (0:00:00.478) 0:03:09.200 **********
2026-03-08 00:55:52.899403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:55:52.899409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:55:52.899417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:55:52.899424 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899430 | orchestrator |
2026-03-08 00:55:52.899437 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-08 00:55:52.899443 | orchestrator | Sunday 08 March 2026 00:47:55 +0000 (0:00:00.389) 0:03:09.589 **********
2026-03-08 00:55:52.899449 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.899457 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.899464 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.899470 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899475 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899481 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899487 | orchestrator |
2026-03-08 00:55:52.899493 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-08 00:55:52.899498 | orchestrator | Sunday 08 March 2026 00:47:56 +0000 (0:00:00.657) 0:03:10.247 **********
2026-03-08 00:55:52.899504 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-08 00:55:52.899510 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-08 00:55:52.899516 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-08 00:55:52.899522 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-08 00:55:52.899529 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.899535 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-08 00:55:52.899541 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-08 00:55:52.899547 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.899553 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.899561 | orchestrator |
2026-03-08 00:55:52.899568 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-08 00:55:52.899576 | orchestrator | Sunday 08 March 2026 00:47:59 +0000 (0:00:02.833) 0:03:13.081 **********
2026-03-08 00:55:52.899583 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.899589 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.899596 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.899601 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.899607 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.899614 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.899620 | orchestrator |
2026-03-08 00:55:52.899627 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:55:52.899634 | orchestrator | Sunday 08 March 2026 00:48:02 +0000 (0:00:03.574) 0:03:16.655 **********
2026-03-08 00:55:52.899641 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.899647 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.899653 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.899666 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.899673 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.899678 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.899684 | orchestrator |
2026-03-08 00:55:52.899690 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-08 00:55:52.899696 | orchestrator | Sunday 08 March 2026 00:48:03 +0000 (0:00:01.010) 0:03:17.665 **********
2026-03-08 00:55:52.899702 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.899708 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.899715 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.899721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.899728 | orchestrator |
2026-03-08 00:55:52.899735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-08 00:55:52.899773 | orchestrator | Sunday 08 March 2026 00:48:05 +0000 (0:00:01.261) 0:03:18.927 **********
2026-03-08 00:55:52.899780 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.899786 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.899791 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.899797 | orchestrator |
2026-03-08 00:55:52.899802 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-08 00:55:52.899808 | orchestrator | Sunday 08 March 2026 00:48:05 +0000 (0:00:00.392) 0:03:19.320 **********
2026-03-08 00:55:52.899814 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.899820 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.899826 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.899832 |
orchestrator | 2026-03-08 00:55:52.899837 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-08 00:55:52.899843 | orchestrator | Sunday 08 March 2026 00:48:06 +0000 (0:00:01.390) 0:03:20.711 ********** 2026-03-08 00:55:52.899849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:55:52.899854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:55:52.899877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:55:52.899883 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.899889 | orchestrator | 2026-03-08 00:55:52.899895 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-08 00:55:52.899900 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:01.138) 0:03:21.849 ********** 2026-03-08 00:55:52.899905 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.899911 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.899917 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.899922 | orchestrator | 2026-03-08 00:55:52.899928 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-08 00:55:52.899934 | orchestrator | Sunday 08 March 2026 00:48:08 +0000 (0:00:00.340) 0:03:22.190 ********** 2026-03-08 00:55:52.899940 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.899946 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.899952 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.899958 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.899965 | orchestrator | 2026-03-08 00:55:52.899972 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-08 00:55:52.899986 | orchestrator | Sunday 08 March 2026 00:48:09 
+0000 (0:00:01.174) 0:03:23.364 ********** 2026-03-08 00:55:52.899993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.899999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.900006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.900012 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900018 | orchestrator | 2026-03-08 00:55:52.900024 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-08 00:55:52.900029 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:00.410) 0:03:23.775 ********** 2026-03-08 00:55:52.900044 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900050 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.900056 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.900061 | orchestrator | 2026-03-08 00:55:52.900067 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-08 00:55:52.900073 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:00.394) 0:03:24.169 ********** 2026-03-08 00:55:52.900080 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900092 | orchestrator | 2026-03-08 00:55:52.900097 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-08 00:55:52.900103 | orchestrator | Sunday 08 March 2026 00:48:10 +0000 (0:00:00.304) 0:03:24.474 ********** 2026-03-08 00:55:52.900108 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900114 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.900120 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.900126 | orchestrator | 2026-03-08 00:55:52.900132 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-08 00:55:52.900138 | orchestrator | Sunday 08 March 2026 
00:48:11 +0000 (0:00:00.362) 0:03:24.836 ********** 2026-03-08 00:55:52.900144 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900153 | orchestrator | 2026-03-08 00:55:52.900161 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-08 00:55:52.900174 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:00.224) 0:03:25.061 ********** 2026-03-08 00:55:52.900180 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900192 | orchestrator | 2026-03-08 00:55:52.900201 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-08 00:55:52.900213 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:00.220) 0:03:25.281 ********** 2026-03-08 00:55:52.900221 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900231 | orchestrator | 2026-03-08 00:55:52.900242 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-08 00:55:52.900250 | orchestrator | Sunday 08 March 2026 00:48:11 +0000 (0:00:00.120) 0:03:25.401 ********** 2026-03-08 00:55:52.900256 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900263 | orchestrator | 2026-03-08 00:55:52.900268 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-08 00:55:52.900275 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.756) 0:03:26.157 ********** 2026-03-08 00:55:52.900281 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900286 | orchestrator | 2026-03-08 00:55:52.900292 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-08 00:55:52.900298 | orchestrator | Sunday 08 March 2026 00:48:12 +0000 (0:00:00.238) 0:03:26.396 ********** 2026-03-08 00:55:52.900304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.900310 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.900316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.900322 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900328 | orchestrator | 2026-03-08 00:55:52.900334 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-08 00:55:52.900384 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:00.418) 0:03:26.814 ********** 2026-03-08 00:55:52.900393 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900399 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.900405 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.900411 | orchestrator | 2026-03-08 00:55:52.900418 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-08 00:55:52.900424 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:00.375) 0:03:27.190 ********** 2026-03-08 00:55:52.900430 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900437 | orchestrator | 2026-03-08 00:55:52.900443 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-08 00:55:52.900458 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:00.240) 0:03:27.430 ********** 2026-03-08 00:55:52.900463 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900469 | orchestrator | 2026-03-08 00:55:52.900475 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-08 00:55:52.900480 | orchestrator | Sunday 08 March 2026 00:48:13 +0000 (0:00:00.251) 0:03:27.682 ********** 2026-03-08 00:55:52.900486 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.900491 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.900497 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.900503 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.900509 | orchestrator | 2026-03-08 00:55:52.900515 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-08 00:55:52.900521 | orchestrator | Sunday 08 March 2026 00:48:15 +0000 (0:00:01.164) 0:03:28.847 ********** 2026-03-08 00:55:52.900527 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.900533 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.900537 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.900541 | orchestrator | 2026-03-08 00:55:52.900545 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-08 00:55:52.900549 | orchestrator | Sunday 08 March 2026 00:48:15 +0000 (0:00:00.402) 0:03:29.249 ********** 2026-03-08 00:55:52.900552 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.900556 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.900560 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.900564 | orchestrator | 2026-03-08 00:55:52.900574 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-08 00:55:52.900580 | orchestrator | Sunday 08 March 2026 00:48:16 +0000 (0:00:01.153) 0:03:30.403 ********** 2026-03-08 00:55:52.900586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.900591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.900597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.900603 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900609 | orchestrator | 2026-03-08 00:55:52.900615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-08 00:55:52.900621 | orchestrator | Sunday 08 March 2026 00:48:17 +0000 (0:00:00.912) 
0:03:31.316 ********** 2026-03-08 00:55:52.900628 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.900634 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.900640 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.900646 | orchestrator | 2026-03-08 00:55:52.900652 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-08 00:55:52.900658 | orchestrator | Sunday 08 March 2026 00:48:18 +0000 (0:00:00.631) 0:03:31.947 ********** 2026-03-08 00:55:52.900664 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.900670 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.900676 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.900682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.900689 | orchestrator | 2026-03-08 00:55:52.900695 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-08 00:55:52.900701 | orchestrator | Sunday 08 March 2026 00:48:19 +0000 (0:00:00.878) 0:03:32.826 ********** 2026-03-08 00:55:52.900708 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.900714 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.900720 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.900727 | orchestrator | 2026-03-08 00:55:52.900733 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-08 00:55:52.900738 | orchestrator | Sunday 08 March 2026 00:48:19 +0000 (0:00:00.562) 0:03:33.388 ********** 2026-03-08 00:55:52.900745 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.900751 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.900763 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.900769 | orchestrator | 2026-03-08 00:55:52.900775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-03-08 00:55:52.900781 | orchestrator | Sunday 08 March 2026 00:48:20 +0000 (0:00:01.040) 0:03:34.428 ********** 2026-03-08 00:55:52.900787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.900794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.900799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.900805 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900811 | orchestrator | 2026-03-08 00:55:52.900817 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-08 00:55:52.900824 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:00.699) 0:03:35.128 ********** 2026-03-08 00:55:52.900830 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.900836 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.900841 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.900847 | orchestrator | 2026-03-08 00:55:52.900852 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-08 00:55:52.900986 | orchestrator | Sunday 08 March 2026 00:48:21 +0000 (0:00:00.308) 0:03:35.436 ********** 2026-03-08 00:55:52.900992 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.900998 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.901004 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.901010 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901016 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901062 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901071 | orchestrator | 2026-03-08 00:55:52.901077 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-08 00:55:52.901083 | orchestrator | Sunday 08 March 2026 00:48:22 +0000 (0:00:00.985) 0:03:36.422 ********** 2026-03-08 
00:55:52.901089 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.901094 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.901100 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.901106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.901113 | orchestrator | 2026-03-08 00:55:52.901120 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-08 00:55:52.901126 | orchestrator | Sunday 08 March 2026 00:48:23 +0000 (0:00:00.885) 0:03:37.308 ********** 2026-03-08 00:55:52.901132 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901142 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901149 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.901157 | orchestrator | 2026-03-08 00:55:52.901168 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-08 00:55:52.901176 | orchestrator | Sunday 08 March 2026 00:48:24 +0000 (0:00:00.587) 0:03:37.896 ********** 2026-03-08 00:55:52.901185 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.901193 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.901201 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.901207 | orchestrator | 2026-03-08 00:55:52.901213 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-08 00:55:52.901218 | orchestrator | Sunday 08 March 2026 00:48:25 +0000 (0:00:01.386) 0:03:39.283 ********** 2026-03-08 00:55:52.901224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:55:52.901230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:55:52.901236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:55:52.901242 | orchestrator | skipping: [testbed-node-0] 2026-03-08 
00:55:52.901248 | orchestrator | 2026-03-08 00:55:52.901254 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-08 00:55:52.901260 | orchestrator | Sunday 08 March 2026 00:48:26 +0000 (0:00:00.655) 0:03:39.938 ********** 2026-03-08 00:55:52.901275 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901289 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901296 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.901302 | orchestrator | 2026-03-08 00:55:52.901309 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-08 00:55:52.901315 | orchestrator | 2026-03-08 00:55:52.901397 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:55:52.901408 | orchestrator | Sunday 08 March 2026 00:48:26 +0000 (0:00:00.578) 0:03:40.517 ********** 2026-03-08 00:55:52.901415 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.901422 | orchestrator | 2026-03-08 00:55:52.901427 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:55:52.901433 | orchestrator | Sunday 08 March 2026 00:48:27 +0000 (0:00:00.760) 0:03:41.277 ********** 2026-03-08 00:55:52.901439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.901445 | orchestrator | 2026-03-08 00:55:52.901451 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:55:52.901457 | orchestrator | Sunday 08 March 2026 00:48:28 +0000 (0:00:00.589) 0:03:41.867 ********** 2026-03-08 00:55:52.901463 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901469 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901475 | 
orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.901481 | orchestrator | 2026-03-08 00:55:52.901487 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:55:52.901492 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.970) 0:03:42.837 ********** 2026-03-08 00:55:52.901498 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901504 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901510 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901517 | orchestrator | 2026-03-08 00:55:52.901524 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:55:52.901530 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.332) 0:03:43.170 ********** 2026-03-08 00:55:52.901536 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901543 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901550 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901557 | orchestrator | 2026-03-08 00:55:52.901563 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:55:52.901570 | orchestrator | Sunday 08 March 2026 00:48:29 +0000 (0:00:00.333) 0:03:43.503 ********** 2026-03-08 00:55:52.901576 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901583 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901590 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901597 | orchestrator | 2026-03-08 00:55:52.901604 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:55:52.901610 | orchestrator | Sunday 08 March 2026 00:48:30 +0000 (0:00:00.353) 0:03:43.857 ********** 2026-03-08 00:55:52.901616 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901621 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901627 | orchestrator | ok: 
[testbed-node-2] 2026-03-08 00:55:52.901633 | orchestrator | 2026-03-08 00:55:52.901639 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:55:52.901644 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:00.957) 0:03:44.814 ********** 2026-03-08 00:55:52.901650 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901655 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901660 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901666 | orchestrator | 2026-03-08 00:55:52.901672 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:55:52.901678 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:00.362) 0:03:45.176 ********** 2026-03-08 00:55:52.901747 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901758 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901765 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901771 | orchestrator | 2026-03-08 00:55:52.901776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:55:52.901783 | orchestrator | Sunday 08 March 2026 00:48:31 +0000 (0:00:00.323) 0:03:45.500 ********** 2026-03-08 00:55:52.901789 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901794 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901800 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.901806 | orchestrator | 2026-03-08 00:55:52.901813 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:55:52.901819 | orchestrator | Sunday 08 March 2026 00:48:32 +0000 (0:00:00.689) 0:03:46.189 ********** 2026-03-08 00:55:52.901825 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901831 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901836 | orchestrator | ok: [testbed-node-2] 2026-03-08 
00:55:52.901842 | orchestrator | 2026-03-08 00:55:52.901849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:55:52.901856 | orchestrator | Sunday 08 March 2026 00:48:33 +0000 (0:00:00.710) 0:03:46.900 ********** 2026-03-08 00:55:52.901911 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.901920 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.901926 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.901933 | orchestrator | 2026-03-08 00:55:52.901940 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:55:52.901947 | orchestrator | Sunday 08 March 2026 00:48:33 +0000 (0:00:00.673) 0:03:47.573 ********** 2026-03-08 00:55:52.901954 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.901961 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.901968 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.901975 | orchestrator | 2026-03-08 00:55:52.901982 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:55:52.901988 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:00.411) 0:03:47.985 ********** 2026-03-08 00:55:52.901995 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.902002 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.902009 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.902204 | orchestrator | 2026-03-08 00:55:52.902214 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:55:52.902227 | orchestrator | Sunday 08 March 2026 00:48:34 +0000 (0:00:00.354) 0:03:48.340 ********** 2026-03-08 00:55:52.902233 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.902238 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.902244 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.902250 | 
orchestrator | 2026-03-08 00:55:52.902255 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:55:52.902261 | orchestrator | Sunday 08 March 2026 00:48:35 +0000 (0:00:00.416) 0:03:48.757 ********** 2026-03-08 00:55:52.902267 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.902273 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.902278 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.902284 | orchestrator | 2026-03-08 00:55:52.902290 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:55:52.902296 | orchestrator | Sunday 08 March 2026 00:48:35 +0000 (0:00:00.742) 0:03:49.499 ********** 2026-03-08 00:55:52.902303 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.902307 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.902311 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.902314 | orchestrator | 2026-03-08 00:55:52.902318 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:55:52.902322 | orchestrator | Sunday 08 March 2026 00:48:36 +0000 (0:00:00.321) 0:03:49.820 ********** 2026-03-08 00:55:52.902326 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.902329 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.902341 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.902345 | orchestrator | 2026-03-08 00:55:52.902349 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:55:52.902353 | orchestrator | Sunday 08 March 2026 00:48:36 +0000 (0:00:00.365) 0:03:50.186 ********** 2026-03-08 00:55:52.902356 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.902360 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.902364 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.902368 | orchestrator | 
2026-03-08 00:55:52.902371 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:55:52.902375 | orchestrator | Sunday 08 March 2026 00:48:36 +0000 (0:00:00.355) 0:03:50.542 ********** 2026-03-08 00:55:52.902379 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.902383 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.902386 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.902390 | orchestrator | 2026-03-08 00:55:52.902394 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:55:52.902398 | orchestrator | Sunday 08 March 2026 00:48:37 +0000 (0:00:00.629) 0:03:51.171 ********** 2026-03-08 00:55:52.902401 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.902405 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.902409 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.902412 | orchestrator | 2026-03-08 00:55:52.902416 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-08 00:55:52.902420 | orchestrator | Sunday 08 March 2026 00:48:38 +0000 (0:00:00.705) 0:03:51.876 ********** 2026-03-08 00:55:52.902424 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.902427 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.902431 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.902435 | orchestrator | 2026-03-08 00:55:52.902439 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-08 00:55:52.902442 | orchestrator | Sunday 08 March 2026 00:48:38 +0000 (0:00:00.417) 0:03:52.294 ********** 2026-03-08 00:55:52.902446 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.902450 | orchestrator | 2026-03-08 00:55:52.902454 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
**************
2026-03-08 00:55:52.902458 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:00.921) 0:03:53.215 **********
2026-03-08 00:55:52.902462 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.902465 | orchestrator |
2026-03-08 00:55:52.902509 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-08 00:55:52.902514 | orchestrator | Sunday 08 March 2026 00:48:39 +0000 (0:00:00.169) 0:03:53.385 **********
2026-03-08 00:55:52.902518 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-08 00:55:52.902522 | orchestrator |
2026-03-08 00:55:52.902526 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-08 00:55:52.902530 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:01.383) 0:03:54.768 **********
2026-03-08 00:55:52.902533 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902537 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.902541 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.902545 | orchestrator |
2026-03-08 00:55:52.902549 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-08 00:55:52.902552 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:00.398) 0:03:55.166 **********
2026-03-08 00:55:52.902556 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.902560 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902564 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.902567 | orchestrator |
2026-03-08 00:55:52.902571 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-08 00:55:52.902575 | orchestrator | Sunday 08 March 2026 00:48:41 +0000 (0:00:00.445) 0:03:55.612 **********
2026-03-08 00:55:52.902579 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902583 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902590 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902594 | orchestrator |
2026-03-08 00:55:52.902598 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-08 00:55:52.902602 | orchestrator | Sunday 08 March 2026 00:48:43 +0000 (0:00:01.403) 0:03:57.016 **********
2026-03-08 00:55:52.902605 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902609 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902613 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902617 | orchestrator |
2026-03-08 00:55:52.902620 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-08 00:55:52.902624 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:00.729) 0:03:57.746 **********
2026-03-08 00:55:52.902628 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902632 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902635 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902639 | orchestrator |
2026-03-08 00:55:52.902647 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-08 00:55:52.902651 | orchestrator | Sunday 08 March 2026 00:48:44 +0000 (0:00:00.726) 0:03:58.472 **********
2026-03-08 00:55:52.902655 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902658 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.902662 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.902666 | orchestrator |
2026-03-08 00:55:52.902670 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-08 00:55:52.902673 | orchestrator | Sunday 08 March 2026 00:48:45 +0000 (0:00:00.647) 0:03:59.120 **********
2026-03-08 00:55:52.902677 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902681 | orchestrator |
2026-03-08 00:55:52.902685 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-08 00:55:52.902689 | orchestrator | Sunday 08 March 2026 00:48:46 +0000 (0:00:01.387) 0:04:00.508 **********
2026-03-08 00:55:52.902692 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902696 | orchestrator |
2026-03-08 00:55:52.902700 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-08 00:55:52.902704 | orchestrator | Sunday 08 March 2026 00:48:48 +0000 (0:00:01.347) 0:04:01.855 **********
2026-03-08 00:55:52.902707 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.902711 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.902715 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.902721 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-08 00:55:52.902727 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-08 00:55:52.902733 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-08 00:55:52.902739 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-08 00:55:52.902750 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-03-08 00:55:52.902757 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-08 00:55:52.902762 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-08 00:55:52.902768 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-08 00:55:52.902774 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-08 00:55:52.902781 | orchestrator |
2026-03-08 00:55:52.902787 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-08 00:55:52.902793 | orchestrator | Sunday 08 March 2026 00:48:51 +0000 (0:00:03.065) 0:04:04.921 **********
2026-03-08 00:55:52.902798 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902804 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902810 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902816 | orchestrator |
2026-03-08 00:55:52.902822 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-08 00:55:52.902828 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:01.080) 0:04:06.002 **********
2026-03-08 00:55:52.902840 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902846 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.902852 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.902858 | orchestrator |
2026-03-08 00:55:52.902885 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-08 00:55:52.902890 | orchestrator | Sunday 08 March 2026 00:48:52 +0000 (0:00:00.412) 0:04:06.414 **********
2026-03-08 00:55:52.902893 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.902897 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.902901 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.902906 | orchestrator |
2026-03-08 00:55:52.902912 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-08 00:55:52.902917 | orchestrator | Sunday 08 March 2026 00:48:53 +0000 (0:00:00.324) 0:04:06.739 **********
2026-03-08 00:55:52.902951 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902957 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902961 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902965 | orchestrator |
2026-03-08 00:55:52.902969 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-08 00:55:52.902972 | orchestrator | Sunday 08 March 2026 00:48:55 +0000 (0:00:02.332) 0:04:09.072 **********
2026-03-08 00:55:52.902976 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.902980 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.902983 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.902987 | orchestrator |
2026-03-08 00:55:52.902991 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-08 00:55:52.902994 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:01.730) 0:04:10.803 **********
2026-03-08 00:55:52.902998 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903002 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903006 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903009 | orchestrator |
2026-03-08 00:55:52.903013 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-08 00:55:52.903017 | orchestrator | Sunday 08 March 2026 00:48:57 +0000 (0:00:00.377) 0:04:11.180 **********
2026-03-08 00:55:52.903020 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903024 | orchestrator |
2026-03-08 00:55:52.903028 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-08 00:55:52.903032 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:01.668) 0:04:12.849 **********
2026-03-08 00:55:52.903036 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903039 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903043 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903047 | orchestrator |
2026-03-08 00:55:52.903050 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-08 00:55:52.903054 | orchestrator | Sunday 08 March 2026 00:48:59 +0000 (0:00:00.827) 0:04:13.677 **********
2026-03-08 00:55:52.903058 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903062 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903065 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903069 | orchestrator |
2026-03-08 00:55:52.903073 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-08 00:55:52.903081 | orchestrator | Sunday 08 March 2026 00:49:00 +0000 (0:00:01.011) 0:04:14.689 **********
2026-03-08 00:55:52.903085 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903089 | orchestrator |
2026-03-08 00:55:52.903092 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-08 00:55:52.903096 | orchestrator | Sunday 08 March 2026 00:49:01 +0000 (0:00:00.937) 0:04:15.627 **********
2026-03-08 00:55:52.903100 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.903103 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.903111 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.903114 | orchestrator |
2026-03-08 00:55:52.903118 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-08 00:55:52.903122 | orchestrator | Sunday 08 March 2026 00:49:04 +0000 (0:00:02.524) 0:04:18.151 **********
2026-03-08 00:55:52.903125 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.903129 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.903133 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.903136 | orchestrator |
2026-03-08 00:55:52.903140 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-08 00:55:52.903144 | orchestrator | Sunday 08 March 2026 00:49:05 +0000 (0:00:01.492) 0:04:19.643 **********
2026-03-08 00:55:52.903147 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.903151 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.903155 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.903159 | orchestrator |
2026-03-08 00:55:52.903162 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-08 00:55:52.903166 | orchestrator | Sunday 08 March 2026 00:49:07 +0000 (0:00:01.917) 0:04:21.560 **********
2026-03-08 00:55:52.903170 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.903174 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.903177 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.903181 | orchestrator |
2026-03-08 00:55:52.903185 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-08 00:55:52.903188 | orchestrator | Sunday 08 March 2026 00:49:10 +0000 (0:00:02.250) 0:04:23.811 **********
2026-03-08 00:55:52.903192 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903196 | orchestrator |
2026-03-08 00:55:52.903199 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-08 00:55:52.903203 | orchestrator | Sunday 08 March 2026 00:49:10 +0000 (0:00:00.625) 0:04:24.436 **********
2026-03-08 00:55:52.903207 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903210 | orchestrator |
2026-03-08 00:55:52.903214 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-08 00:55:52.903218 | orchestrator | Sunday 08 March 2026 00:49:11 +0000 (0:00:01.183) 0:04:25.619 **********
2026-03-08 00:55:52.903221 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903227 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903233 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903238 | orchestrator |
2026-03-08 00:55:52.903247 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-08 00:55:52.903254 | orchestrator | Sunday 08 March 2026 00:49:22 +0000 (0:00:11.070) 0:04:36.689 **********
2026-03-08 00:55:52.903261 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903268 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903273 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903279 | orchestrator |
2026-03-08 00:55:52.903285 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-08 00:55:52.903291 | orchestrator | Sunday 08 March 2026 00:49:23 +0000 (0:00:00.709) 0:04:37.399 **********
2026-03-08 00:55:52.903322 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-08 00:55:52.903332 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-08 00:55:52.903339 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-08 00:55:52.903349 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-08 00:55:52.903358 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-08 00:55:52.903363 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__35ac284c84be3e6a4da86466777481b2f590a9c2'}])
2026-03-08 00:55:52.903368 | orchestrator |
2026-03-08 00:55:52.903372 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:55:52.903376 | orchestrator | Sunday 08 March 2026 00:49:39 +0000 (0:00:15.949) 0:04:53.348 **********
2026-03-08 00:55:52.903379 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903383 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903387 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903391 | orchestrator |
2026-03-08 00:55:52.903394 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-08 00:55:52.903398 | orchestrator | Sunday 08 March 2026 00:49:39 +0000 (0:00:00.327) 0:04:53.676 **********
2026-03-08 00:55:52.903402 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903406 | orchestrator |
2026-03-08 00:55:52.903410 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-08 00:55:52.903413 | orchestrator | Sunday 08 March 2026 00:49:40 +0000 (0:00:00.860) 0:04:54.537 **********
2026-03-08 00:55:52.903417 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903421 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903424 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903428 | orchestrator |
2026-03-08 00:55:52.903432 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-08 00:55:52.903436 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:00.421) 0:04:54.958 **********
2026-03-08 00:55:52.903439 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903443 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903447 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903450 | orchestrator |
2026-03-08 00:55:52.903454 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-08 00:55:52.903458 | orchestrator | Sunday 08 March 2026 00:49:41 +0000 (0:00:00.352) 0:04:55.311 **********
2026-03-08 00:55:52.903462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:55:52.903465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-08 00:55:52.903469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-08 00:55:52.903473 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903477 | orchestrator |
2026-03-08 00:55:52.903480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-08 00:55:52.903488 | orchestrator | Sunday 08 March 2026 00:49:42 +0000 (0:00:00.936) 0:04:56.247 **********
2026-03-08 00:55:52.903491 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903495 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903499 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903503 | orchestrator |
2026-03-08 00:55:52.903506 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-08 00:55:52.903510 | orchestrator |
2026-03-08 00:55:52.903528 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:55:52.903532 | orchestrator | Sunday 08 March 2026 00:49:43 +0000 (0:00:00.875) 0:04:57.122 **********
2026-03-08 00:55:52.903536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903541 | orchestrator |
2026-03-08 00:55:52.903545 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:55:52.903549 | orchestrator | Sunday 08 March 2026 00:49:43 +0000 (0:00:00.554) 0:04:57.677 **********
2026-03-08 00:55:52.903552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.903556 | orchestrator |
2026-03-08 00:55:52.903560 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:55:52.903564 | orchestrator | Sunday 08 March 2026 00:49:44 +0000 (0:00:00.731) 0:04:58.409 **********
2026-03-08 00:55:52.903567 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903571 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903575 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903579 | orchestrator |
2026-03-08 00:55:52.903583 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:55:52.903586 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.649) 0:04:59.058 **********
2026-03-08 00:55:52.903590 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903594 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903597 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903601 | orchestrator |
2026-03-08 00:55:52.903605 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:55:52.903609 | orchestrator | Sunday 08 March 2026 00:49:45 +0000 (0:00:00.294) 0:04:59.353 **********
2026-03-08 00:55:52.903612 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903616 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903620 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903623 | orchestrator |
2026-03-08 00:55:52.903627 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:55:52.903634 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:00.548) 0:04:59.901 **********
2026-03-08 00:55:52.903638 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903641 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903645 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903649 | orchestrator |
2026-03-08 00:55:52.903653 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:55:52.903656 | orchestrator | Sunday 08 March 2026 00:49:46 +0000 (0:00:00.294) 0:05:00.196 **********
2026-03-08 00:55:52.903660 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903664 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903668 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903671 | orchestrator |
2026-03-08 00:55:52.903675 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:55:52.903679 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.731) 0:05:00.927 **********
2026-03-08 00:55:52.903683 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903783 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903790 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903796 | orchestrator |
2026-03-08 00:55:52.903801 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:55:52.903811 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.309) 0:05:01.236 **********
2026-03-08 00:55:52.903817 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903822 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903827 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903833 | orchestrator |
2026-03-08 00:55:52.903838 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:55:52.903844 | orchestrator | Sunday 08 March 2026 00:49:47 +0000 (0:00:00.287) 0:05:01.523 **********
2026-03-08 00:55:52.903849 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903855 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903885 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903891 | orchestrator |
2026-03-08 00:55:52.903926 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:55:52.903932 | orchestrator | Sunday 08 March 2026 00:49:48 +0000 (0:00:01.010) 0:05:02.534 **********
2026-03-08 00:55:52.903938 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.903944 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.903949 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.903955 | orchestrator |
2026-03-08 00:55:52.903961 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:55:52.903967 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.677) 0:05:03.212 **********
2026-03-08 00:55:52.903973 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.903979 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.903985 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.903989 | orchestrator |
2026-03-08 00:55:52.903993 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:55:52.903997 | orchestrator | Sunday 08 March 2026 00:49:49 +0000 (0:00:00.297) 0:05:03.509 **********
2026-03-08 00:55:52.904001 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904004 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904008 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904012 | orchestrator |
2026-03-08 00:55:52.904016 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:55:52.904020 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:00.353) 0:05:03.863 **********
2026-03-08 00:55:52.904023 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904027 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904031 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904034 | orchestrator |
2026-03-08 00:55:52.904038 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:55:52.904042 | orchestrator | Sunday 08 March 2026 00:49:50 +0000 (0:00:00.641) 0:05:04.504 **********
2026-03-08 00:55:52.904046 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904050 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904090 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904097 | orchestrator |
2026-03-08 00:55:52.904101 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:55:52.904105 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:00.312) 0:05:04.816 **********
2026-03-08 00:55:52.904111 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904117 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904123 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904129 | orchestrator |
2026-03-08 00:55:52.904135 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:55:52.904141 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:00.334) 0:05:05.151 **********
2026-03-08 00:55:52.904147 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904153 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904159 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904163 | orchestrator |
2026-03-08 00:55:52.904167 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:55:52.904170 | orchestrator | Sunday 08 March 2026 00:49:51 +0000 (0:00:00.317) 0:05:05.469 **********
2026-03-08 00:55:52.904182 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904186 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904190 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904193 | orchestrator |
2026-03-08 00:55:52.904197 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:55:52.904201 | orchestrator | Sunday 08 March 2026 00:49:52 +0000 (0:00:00.564) 0:05:06.033 **********
2026-03-08 00:55:52.904204 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904208 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904214 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904219 | orchestrator |
2026-03-08 00:55:52.904225 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:55:52.904231 | orchestrator | Sunday 08 March 2026 00:49:52 +0000 (0:00:00.394) 0:05:06.428 **********
2026-03-08 00:55:52.904239 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904247 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904254 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904259 | orchestrator |
2026-03-08 00:55:52.904265 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:55:52.904272 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.383) 0:05:06.811 **********
2026-03-08 00:55:52.904284 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904291 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904297 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904304 | orchestrator |
2026-03-08 00:55:52.904310 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-08 00:55:52.904316 | orchestrator | Sunday 08 March 2026 00:49:53 +0000 (0:00:00.806) 0:05:07.617 **********
2026-03-08 00:55:52.904322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-08 00:55:52.904328 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:55:52.904334 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:55:52.904338 | orchestrator |
2026-03-08 00:55:52.904342 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-08 00:55:52.904346 | orchestrator | Sunday 08 March 2026 00:49:54 +0000 (0:00:00.662) 0:05:08.280 **********
2026-03-08 00:55:52.904349 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.904354 | orchestrator |
2026-03-08 00:55:52.904358 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-08 00:55:52.904361 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:00.561) 0:05:08.841 **********
2026-03-08 00:55:52.904365 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.904369 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.904372 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.904376 | orchestrator |
2026-03-08 00:55:52.904380 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-08 00:55:52.904384 | orchestrator | Sunday 08 March 2026 00:49:55 +0000 (0:00:00.743) 0:05:09.585 **********
2026-03-08 00:55:52.904387 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904391 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904395 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904398 | orchestrator |
2026-03-08 00:55:52.904402 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-08 00:55:52.904406 | orchestrator | Sunday 08 March 2026 00:49:56 +0000 (0:00:00.587) 0:05:10.172 **********
2026-03-08 00:55:52.904410 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904414 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904418 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904422 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-08 00:55:52.904425 | orchestrator |
2026-03-08 00:55:52.904429 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-08 00:55:52.904443 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:11.009) 0:05:21.182 **********
2026-03-08 00:55:52.904446 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904450 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904454 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904458 | orchestrator |
2026-03-08 00:55:52.904461 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-08 00:55:52.904465 | orchestrator | Sunday 08 March 2026 00:50:07 +0000 (0:00:00.384) 0:05:21.566 **********
2026-03-08 00:55:52.904469 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904473 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 00:55:52.904476 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 00:55:52.904481 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904484 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.904488 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.904492 | orchestrator |
2026-03-08 00:55:52.904606 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:55:52.904612 | orchestrator | Sunday 08 March 2026 00:50:10 +0000 (0:00:02.430) 0:05:23.996 **********
2026-03-08 00:55:52.904616 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904620 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 00:55:52.904624 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 00:55:52.904627 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 00:55:52.904631 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-08 00:55:52.904635 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-08 00:55:52.904639 | orchestrator |
2026-03-08 00:55:52.904643 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-08 00:55:52.904646 | orchestrator | Sunday 08 March 2026 00:50:11 +0000 (0:00:01.418) 0:05:25.414 **********
2026-03-08 00:55:52.904650 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.904654 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.904658 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.904661 | orchestrator |
2026-03-08 00:55:52.904665 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-08 00:55:52.904669 | orchestrator | Sunday 08 March 2026 00:50:12 +0000 (0:00:01.030) 0:05:26.445 **********
2026-03-08 00:55:52.904673 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904677 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904680 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904684 | orchestrator |
2026-03-08 00:55:52.904688 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-08 00:55:52.904692 | orchestrator | Sunday 08 March 2026 00:50:13 +0000 (0:00:00.306) 0:05:26.751 **********
2026-03-08 00:55:52.904695 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904699 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904703 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904706 | orchestrator |
2026-03-08 00:55:52.904710 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-08 00:55:52.904714 | orchestrator | Sunday 08 March 2026 00:50:13 +0000 (0:00:00.356) 0:05:27.108 **********
2026-03-08 00:55:52.904718 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-08 00:55:52.904722 | orchestrator |
2026-03-08 00:55:52.904729 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-08 00:55:52.904733 | orchestrator | Sunday 08 March 2026 00:50:14 +0000 (0:00:01.016) 0:05:28.125 **********
2026-03-08 00:55:52.904737 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904741 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904744 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904748 | orchestrator |
2026-03-08 00:55:52.904752 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-08 00:55:52.904759 | orchestrator | Sunday 08 March 2026 00:50:14 +0000 (0:00:00.496) 0:05:28.491 **********
2026-03-08 00:55:52.904763 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:55:52.904767 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:55:52.904771 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:55:52.904774 | orchestrator |
2026-03-08 00:55:52.904778 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-08 00:55:52.904782 | orchestrator | Sunday 08 March 2026 00:50:15 +0000 (0:00:00.582) 0:05:28.987 **********
2026-03-08 00:55:52.904785 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.904789 | orchestrator |
2026-03-08 00:55:52.904793 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-08 00:55:52.904797 | orchestrator | Sunday 08 March 2026 00:50:15 +0000 (0:00:00.582) 0:05:29.570 **********
2026-03-08 00:55:52.904800 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.904804 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.904808 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.904859 | orchestrator |
2026-03-08 00:55:52.904881 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-08 00:55:52.904885 | orchestrator | Sunday 08 March 2026 00:50:17 +0000 (0:00:01.807) 0:05:31.377 **********
2026-03-08 00:55:52.904889 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.904892 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.904896 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.904900 | orchestrator |
2026-03-08 00:55:52.904904 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-08 00:55:52.904908 | orchestrator | Sunday 08 March 2026 00:50:18 +0000 (0:00:01.193) 0:05:32.571 **********
2026-03-08 00:55:52.904911 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.904915 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.904919 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.904923 | orchestrator |
2026-03-08 00:55:52.904926 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-08 00:55:52.904930 | orchestrator | Sunday 08 March 2026 00:50:20 +0000 (0:00:02.104) 0:05:34.676 **********
2026-03-08 00:55:52.904934 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.904938 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.904941 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.904945 | orchestrator |
2026-03-08 00:55:52.904949 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-08 00:55:52.904953 | orchestrator | Sunday 08 March 2026 00:50:23 +0000
(0:00:02.246) 0:05:36.923 ********** 2026-03-08 00:55:52.904956 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.904960 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.904964 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-08 00:55:52.904968 | orchestrator | 2026-03-08 00:55:52.904971 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-08 00:55:52.904975 | orchestrator | Sunday 08 March 2026 00:50:23 +0000 (0:00:00.701) 0:05:37.624 ********** 2026-03-08 00:55:52.904979 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-08 00:55:52.905000 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-08 00:55:52.905005 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-08 00:55:52.905008 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-08 00:55:52.905012 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-08 00:55:52.905016 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-08 00:55:52.905024 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.905028 | orchestrator | 2026-03-08 00:55:52.905031 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-08 00:55:52.905037 | orchestrator | Sunday 08 March 2026 00:50:59 +0000 (0:00:36.102) 0:06:13.727 ********** 2026-03-08 00:55:52.905042 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.905048 | orchestrator | 2026-03-08 00:55:52.905055 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-08 00:55:52.905061 | orchestrator | Sunday 08 March 2026 00:51:01 +0000 (0:00:01.421) 0:06:15.148 ********** 2026-03-08 00:55:52.905066 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.905072 | orchestrator | 2026-03-08 00:55:52.905078 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-08 00:55:52.905084 | orchestrator | Sunday 08 March 2026 00:51:01 +0000 (0:00:00.326) 0:06:15.475 ********** 2026-03-08 00:55:52.905092 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.905096 | orchestrator | 2026-03-08 00:55:52.905100 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-08 00:55:52.905103 | orchestrator | Sunday 08 March 2026 00:51:01 +0000 (0:00:00.164) 0:06:15.640 ********** 2026-03-08 00:55:52.905107 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-08 00:55:52.905111 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-08 00:55:52.905118 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-08 00:55:52.905122 | orchestrator | 2026-03-08 00:55:52.905126 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-08 00:55:52.905129 | orchestrator | Sunday 08 March 2026 00:51:08 +0000 (0:00:06.447) 0:06:22.088 ********** 2026-03-08 00:55:52.905133 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-08 00:55:52.905137 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-08 00:55:52.905141 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-08 00:55:52.905145 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-08 00:55:52.905148 | orchestrator | 2026-03-08 00:55:52.905152 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-08 00:55:52.905156 | orchestrator | Sunday 08 March 2026 00:51:13 +0000 (0:00:05.103) 0:06:27.191 ********** 2026-03-08 00:55:52.905159 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.905163 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.905167 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.905171 | orchestrator | 2026-03-08 00:55:52.905174 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-08 00:55:52.905178 | orchestrator | Sunday 08 March 2026 00:51:14 +0000 (0:00:00.625) 0:06:27.816 ********** 2026-03-08 00:55:52.905182 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.905228 | orchestrator | 2026-03-08 00:55:52.905232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-08 00:55:52.905236 | orchestrator | Sunday 08 March 2026 00:51:14 +0000 (0:00:00.553) 0:06:28.370 ********** 2026-03-08 00:55:52.905240 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.905244 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.905247 | orchestrator | ok: 
[testbed-node-2] 2026-03-08 00:55:52.905251 | orchestrator | 2026-03-08 00:55:52.905255 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-08 00:55:52.905259 | orchestrator | Sunday 08 March 2026 00:51:15 +0000 (0:00:00.455) 0:06:28.825 ********** 2026-03-08 00:55:52.905262 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:55:52.905266 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:55:52.905270 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:55:52.905277 | orchestrator | 2026-03-08 00:55:52.905281 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-08 00:55:52.905285 | orchestrator | Sunday 08 March 2026 00:51:16 +0000 (0:00:01.351) 0:06:30.177 ********** 2026-03-08 00:55:52.905288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-08 00:55:52.905292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-08 00:55:52.905296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-08 00:55:52.905300 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.905303 | orchestrator | 2026-03-08 00:55:52.905307 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-08 00:55:52.905311 | orchestrator | Sunday 08 March 2026 00:51:16 +0000 (0:00:00.560) 0:06:30.737 ********** 2026-03-08 00:55:52.905315 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.905318 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.905322 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.905326 | orchestrator | 2026-03-08 00:55:52.905330 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-08 00:55:52.905333 | orchestrator | 2026-03-08 00:55:52.905337 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 
00:55:52.905341 | orchestrator | Sunday 08 March 2026 00:51:17 +0000 (0:00:00.459) 0:06:31.197 ********** 2026-03-08 00:55:52.905364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.905369 | orchestrator | 2026-03-08 00:55:52.905373 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-08 00:55:52.905377 | orchestrator | Sunday 08 March 2026 00:51:18 +0000 (0:00:00.701) 0:06:31.898 ********** 2026-03-08 00:55:52.905381 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.905384 | orchestrator | 2026-03-08 00:55:52.905388 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:55:52.905392 | orchestrator | Sunday 08 March 2026 00:51:18 +0000 (0:00:00.449) 0:06:32.348 ********** 2026-03-08 00:55:52.905396 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905399 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905403 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905407 | orchestrator | 2026-03-08 00:55:52.905410 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:55:52.905414 | orchestrator | Sunday 08 March 2026 00:51:19 +0000 (0:00:00.435) 0:06:32.783 ********** 2026-03-08 00:55:52.905418 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905422 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905426 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905432 | orchestrator | 2026-03-08 00:55:52.905438 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:55:52.905443 | orchestrator | Sunday 08 March 2026 00:51:19 +0000 (0:00:00.693) 0:06:33.477 ********** 
2026-03-08 00:55:52.905449 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905454 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905460 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905465 | orchestrator | 2026-03-08 00:55:52.905471 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:55:52.905477 | orchestrator | Sunday 08 March 2026 00:51:20 +0000 (0:00:00.727) 0:06:34.205 ********** 2026-03-08 00:55:52.905482 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905488 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905495 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905501 | orchestrator | 2026-03-08 00:55:52.905507 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:55:52.905517 | orchestrator | Sunday 08 March 2026 00:51:21 +0000 (0:00:00.852) 0:06:35.057 ********** 2026-03-08 00:55:52.905524 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905535 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905541 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905547 | orchestrator | 2026-03-08 00:55:52.905553 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-08 00:55:52.905560 | orchestrator | Sunday 08 March 2026 00:51:21 +0000 (0:00:00.484) 0:06:35.541 ********** 2026-03-08 00:55:52.905565 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905569 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905573 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905577 | orchestrator | 2026-03-08 00:55:52.905580 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:55:52.905584 | orchestrator | Sunday 08 March 2026 00:51:22 +0000 (0:00:00.274) 0:06:35.816 ********** 2026-03-08 00:55:52.905588 | 
orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905592 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905596 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905599 | orchestrator | 2026-03-08 00:55:52.905603 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:55:52.905607 | orchestrator | Sunday 08 March 2026 00:51:22 +0000 (0:00:00.250) 0:06:36.066 ********** 2026-03-08 00:55:52.905611 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905614 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905618 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905622 | orchestrator | 2026-03-08 00:55:52.905626 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:55:52.905629 | orchestrator | Sunday 08 March 2026 00:51:23 +0000 (0:00:00.818) 0:06:36.885 ********** 2026-03-08 00:55:52.905633 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905637 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905640 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905644 | orchestrator | 2026-03-08 00:55:52.905648 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:55:52.905652 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:00.871) 0:06:37.757 ********** 2026-03-08 00:55:52.905656 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905659 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905663 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905667 | orchestrator | 2026-03-08 00:55:52.905671 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:55:52.905674 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:00.260) 0:06:38.017 ********** 2026-03-08 00:55:52.905678 | orchestrator | skipping: 
[testbed-node-3] 2026-03-08 00:55:52.905682 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905686 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905689 | orchestrator | 2026-03-08 00:55:52.905693 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:55:52.905697 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:00.280) 0:06:38.298 ********** 2026-03-08 00:55:52.905701 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905704 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905708 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905712 | orchestrator | 2026-03-08 00:55:52.905716 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:55:52.905719 | orchestrator | Sunday 08 March 2026 00:51:24 +0000 (0:00:00.263) 0:06:38.561 ********** 2026-03-08 00:55:52.905723 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905727 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905731 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905734 | orchestrator | 2026-03-08 00:55:52.905738 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:55:52.905742 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:00.444) 0:06:39.006 ********** 2026-03-08 00:55:52.905746 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905750 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905771 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905779 | orchestrator | 2026-03-08 00:55:52.905783 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:55:52.905786 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:00.291) 0:06:39.298 ********** 2026-03-08 00:55:52.905790 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905794 | 
orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905798 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905801 | orchestrator | 2026-03-08 00:55:52.905805 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:55:52.905809 | orchestrator | Sunday 08 March 2026 00:51:25 +0000 (0:00:00.272) 0:06:39.570 ********** 2026-03-08 00:55:52.905813 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905816 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905820 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905824 | orchestrator | 2026-03-08 00:55:52.905827 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:55:52.905831 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:00.256) 0:06:39.827 ********** 2026-03-08 00:55:52.905835 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.905839 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.905842 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.905846 | orchestrator | 2026-03-08 00:55:52.905850 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:55:52.905854 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:00.434) 0:06:40.261 ********** 2026-03-08 00:55:52.905857 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905902 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905907 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905911 | orchestrator | 2026-03-08 00:55:52.905914 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:55:52.905918 | orchestrator | Sunday 08 March 2026 00:51:26 +0000 (0:00:00.291) 0:06:40.552 ********** 2026-03-08 00:55:52.905922 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905926 | orchestrator | ok: 
[testbed-node-4] 2026-03-08 00:55:52.905930 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905933 | orchestrator | 2026-03-08 00:55:52.905937 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-08 00:55:52.905941 | orchestrator | Sunday 08 March 2026 00:51:27 +0000 (0:00:00.452) 0:06:41.005 ********** 2026-03-08 00:55:52.905948 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.905952 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.905956 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.905959 | orchestrator | 2026-03-08 00:55:52.905963 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-08 00:55:52.905967 | orchestrator | Sunday 08 March 2026 00:51:27 +0000 (0:00:00.486) 0:06:41.492 ********** 2026-03-08 00:55:52.905971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:55:52.905975 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:55:52.905978 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:55:52.905982 | orchestrator | 2026-03-08 00:55:52.905986 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-08 00:55:52.905990 | orchestrator | Sunday 08 March 2026 00:51:28 +0000 (0:00:00.589) 0:06:42.081 ********** 2026-03-08 00:55:52.905994 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.905997 | orchestrator | 2026-03-08 00:55:52.906001 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-08 00:55:52.906005 | orchestrator | Sunday 08 March 2026 00:51:28 +0000 (0:00:00.520) 0:06:42.602 ********** 2026-03-08 00:55:52.906009 | orchestrator | skipping: 
[testbed-node-3] 2026-03-08 00:55:52.906040 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906045 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906052 | orchestrator | 2026-03-08 00:55:52.906056 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-08 00:55:52.906060 | orchestrator | Sunday 08 March 2026 00:51:29 +0000 (0:00:00.553) 0:06:43.155 ********** 2026-03-08 00:55:52.906064 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906068 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906071 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906075 | orchestrator | 2026-03-08 00:55:52.906079 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-08 00:55:52.906083 | orchestrator | Sunday 08 March 2026 00:51:29 +0000 (0:00:00.327) 0:06:43.483 ********** 2026-03-08 00:55:52.906086 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.906090 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.906094 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.906098 | orchestrator | 2026-03-08 00:55:52.906102 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-08 00:55:52.906105 | orchestrator | Sunday 08 March 2026 00:51:30 +0000 (0:00:00.661) 0:06:44.145 ********** 2026-03-08 00:55:52.906109 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.906113 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.906117 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.906120 | orchestrator | 2026-03-08 00:55:52.906124 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-08 00:55:52.906128 | orchestrator | Sunday 08 March 2026 00:51:30 +0000 (0:00:00.347) 0:06:44.492 ********** 2026-03-08 00:55:52.906132 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:55:52.906136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:55:52.906140 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:55:52.906144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:55:52.906148 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:55:52.906156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-08 00:55:52.906160 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-08 00:55:52.906164 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-08 00:55:52.906167 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:55:52.906171 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:55:52.906175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-08 00:55:52.906179 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-08 00:55:52.906183 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-08 00:55:52.906186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:55:52.906190 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-08 00:55:52.906194 | orchestrator | 2026-03-08 00:55:52.906198 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-08 00:55:52.906201 | orchestrator | Sunday 08 March 2026 00:51:36 +0000 (0:00:05.550) 0:06:50.042 ********** 2026-03-08 00:55:52.906205 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906209 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906213 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906314 | orchestrator | 2026-03-08 00:55:52.906318 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-08 00:55:52.906322 | orchestrator | Sunday 08 March 2026 00:51:36 +0000 (0:00:00.363) 0:06:50.406 ********** 2026-03-08 00:55:52.906330 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.906334 | orchestrator | 2026-03-08 00:55:52.906337 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-08 00:55:52.906344 | orchestrator | Sunday 08 March 2026 00:51:37 +0000 (0:00:00.550) 0:06:50.956 ********** 2026-03-08 00:55:52.906348 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:55:52.906351 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:55:52.906355 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-08 00:55:52.906359 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-08 00:55:52.906363 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-08 00:55:52.906367 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-08 00:55:52.906371 | orchestrator | 2026-03-08 00:55:52.906374 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-08 00:55:52.906378 | orchestrator | Sunday 08 March 2026 00:51:38 +0000 (0:00:01.375) 0:06:52.332 ********** 2026-03-08 00:55:52.906382 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.906386 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-08 00:55:52.906390 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:55:52.906393 | orchestrator | 2026-03-08 00:55:52.906397 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-08 00:55:52.906401 | orchestrator | Sunday 08 March 2026 00:51:40 +0000 (0:00:02.219) 0:06:54.552 ********** 2026-03-08 00:55:52.906405 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-08 00:55:52.906409 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-08 00:55:52.906412 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.906416 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-08 00:55:52.906420 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-08 00:55:52.906424 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.906427 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-08 00:55:52.906431 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-08 00:55:52.906435 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.906439 | orchestrator | 2026-03-08 00:55:52.906443 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-08 00:55:52.906446 | orchestrator | Sunday 08 March 2026 00:51:42 +0000 (0:00:01.204) 0:06:55.757 ********** 2026-03-08 00:55:52.906450 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.906454 | orchestrator | 2026-03-08 00:55:52.906458 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-08 00:55:52.906461 | orchestrator | Sunday 08 March 2026 00:51:44 +0000 (0:00:02.273) 0:06:58.030 ********** 2026-03-08 00:55:52.906465 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.906469 | orchestrator | 2026-03-08 00:55:52.906473 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-08 00:55:52.906476 | orchestrator | Sunday 08 March 2026 00:51:44 +0000 (0:00:00.478) 0:06:58.509 ********** 2026-03-08 00:55:52.906480 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5bde4b8d-c924-5d1f-8c9a-71f523250ead', 'data_vg': 'ceph-5bde4b8d-c924-5d1f-8c9a-71f523250ead'}) 2026-03-08 00:55:52.906486 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb6eff58-5334-5828-9091-c0c39e64aeb1', 'data_vg': 'ceph-fb6eff58-5334-5828-9091-c0c39e64aeb1'}) 2026-03-08 00:55:52.906497 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e9614fc2-8329-596c-937c-60ceb39d5fd3', 'data_vg': 'ceph-e9614fc2-8329-596c-937c-60ceb39d5fd3'}) 2026-03-08 00:55:52.906501 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3bef375-74a7-543b-9618-1787c99aecbb', 'data_vg': 'ceph-e3bef375-74a7-543b-9618-1787c99aecbb'}) 2026-03-08 00:55:52.906532 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ad275011-1eda-59d8-b818-a96e3c140717', 'data_vg': 'ceph-ad275011-1eda-59d8-b818-a96e3c140717'}) 2026-03-08 00:55:52.906537 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eb569be8-41bf-5aa1-acb9-f145abad3137', 'data_vg': 'ceph-eb569be8-41bf-5aa1-acb9-f145abad3137'}) 2026-03-08 00:55:52.906540 | orchestrator | 2026-03-08 00:55:52.906544 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-08 00:55:52.906548 | orchestrator | Sunday 08 March 2026 00:52:23 +0000 (0:00:38.700) 0:07:37.209 ********** 2026-03-08 00:55:52.906552 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906556 | orchestrator | skipping: [testbed-node-4] 2026-03-08 
00:55:52.906559 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906563 | orchestrator | 2026-03-08 00:55:52.906567 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-08 00:55:52.906571 | orchestrator | Sunday 08 March 2026 00:52:23 +0000 (0:00:00.357) 0:07:37.567 ********** 2026-03-08 00:55:52.906574 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.906578 | orchestrator | 2026-03-08 00:55:52.906582 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-08 00:55:52.906586 | orchestrator | Sunday 08 March 2026 00:52:24 +0000 (0:00:00.527) 0:07:38.094 ********** 2026-03-08 00:55:52.906590 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.906594 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.906598 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.906601 | orchestrator | 2026-03-08 00:55:52.906605 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-08 00:55:52.906609 | orchestrator | Sunday 08 March 2026 00:52:25 +0000 (0:00:01.011) 0:07:39.106 ********** 2026-03-08 00:55:52.906613 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.906620 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.906623 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.906627 | orchestrator | 2026-03-08 00:55:52.906631 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-08 00:55:52.906635 | orchestrator | Sunday 08 March 2026 00:52:28 +0000 (0:00:02.835) 0:07:41.942 ********** 2026-03-08 00:55:52.906639 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.906643 | orchestrator | 2026-03-08 00:55:52.906646 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-08 00:55:52.906650 | orchestrator | Sunday 08 March 2026 00:52:28 +0000 (0:00:00.569) 0:07:42.511 ********** 2026-03-08 00:55:52.906654 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.906660 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.906666 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.906672 | orchestrator | 2026-03-08 00:55:52.906677 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-08 00:55:52.906682 | orchestrator | Sunday 08 March 2026 00:52:30 +0000 (0:00:01.798) 0:07:44.309 ********** 2026-03-08 00:55:52.906688 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.906694 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.906699 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.906705 | orchestrator | 2026-03-08 00:55:52.906710 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-08 00:55:52.906716 | orchestrator | Sunday 08 March 2026 00:52:31 +0000 (0:00:01.301) 0:07:45.610 ********** 2026-03-08 00:55:52.906722 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.906728 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.906733 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.906740 | orchestrator | 2026-03-08 00:55:52.906745 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-08 00:55:52.906760 | orchestrator | Sunday 08 March 2026 00:52:33 +0000 (0:00:02.023) 0:07:47.634 ********** 2026-03-08 00:55:52.906765 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906771 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906776 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906782 | orchestrator | 2026-03-08 00:55:52.906788 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-08 00:55:52.906794 | orchestrator | Sunday 08 March 2026 00:52:34 +0000 (0:00:00.367) 0:07:48.001 ********** 2026-03-08 00:55:52.906800 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906806 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906810 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.906814 | orchestrator | 2026-03-08 00:55:52.906818 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-08 00:55:52.906821 | orchestrator | Sunday 08 March 2026 00:52:34 +0000 (0:00:00.675) 0:07:48.677 ********** 2026-03-08 00:55:52.906825 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-08 00:55:52.906829 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-08 00:55:52.906833 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-08 00:55:52.906836 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:55:52.906840 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-08 00:55:52.906844 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-08 00:55:52.906847 | orchestrator | 2026-03-08 00:55:52.906851 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-08 00:55:52.906855 | orchestrator | Sunday 08 March 2026 00:52:36 +0000 (0:00:01.134) 0:07:49.812 ********** 2026-03-08 00:55:52.906859 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-08 00:55:52.906882 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-08 00:55:52.906887 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-08 00:55:52.906890 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-08 00:55:52.906894 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-08 00:55:52.906903 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-08 00:55:52.906907 | orchestrator | 2026-03-08 00:55:52.906910 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-08 00:55:52.906914 | orchestrator | Sunday 08 March 2026 00:52:38 +0000 (0:00:02.483) 0:07:52.295 ********** 2026-03-08 00:55:52.906918 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-08 00:55:52.906922 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-08 00:55:52.906925 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-08 00:55:52.906929 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-08 00:55:52.906933 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-08 00:55:52.906937 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-08 00:55:52.906940 | orchestrator | 2026-03-08 00:55:52.906944 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-08 00:55:52.906948 | orchestrator | Sunday 08 March 2026 00:52:42 +0000 (0:00:03.954) 0:07:56.250 ********** 2026-03-08 00:55:52.906952 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906955 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906959 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.906963 | orchestrator | 2026-03-08 00:55:52.906967 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-08 00:55:52.906970 | orchestrator | Sunday 08 March 2026 00:52:46 +0000 (0:00:03.580) 0:07:59.830 ********** 2026-03-08 00:55:52.906974 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.906978 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.906982 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-08 00:55:52.906986 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-08 00:55:52.906989 | orchestrator | 2026-03-08 00:55:52.906993 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-08 00:55:52.907000 | orchestrator | Sunday 08 March 2026 00:52:58 +0000 (0:00:12.590) 0:08:12.421 ********** 2026-03-08 00:55:52.907004 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907008 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907012 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907016 | orchestrator | 2026-03-08 00:55:52.907030 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-08 00:55:52.907034 | orchestrator | Sunday 08 March 2026 00:52:59 +0000 (0:00:01.022) 0:08:13.444 ********** 2026-03-08 00:55:52.907037 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907041 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907045 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907049 | orchestrator | 2026-03-08 00:55:52.907052 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-08 00:55:52.907056 | orchestrator | Sunday 08 March 2026 00:53:00 +0000 (0:00:00.356) 0:08:13.801 ********** 2026-03-08 00:55:52.907060 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.907064 | orchestrator | 2026-03-08 00:55:52.907068 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-08 00:55:52.907071 | orchestrator | Sunday 08 March 2026 00:53:00 +0000 (0:00:00.540) 0:08:14.342 ********** 2026-03-08 00:55:52.907075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.907079 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-08 00:55:52.907082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.907086 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907090 | orchestrator | 2026-03-08 00:55:52.907094 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-08 00:55:52.907097 | orchestrator | Sunday 08 March 2026 00:53:01 +0000 (0:00:00.780) 0:08:15.122 ********** 2026-03-08 00:55:52.907101 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907105 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907108 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907112 | orchestrator | 2026-03-08 00:55:52.907116 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-08 00:55:52.907120 | orchestrator | Sunday 08 March 2026 00:53:02 +0000 (0:00:00.644) 0:08:15.767 ********** 2026-03-08 00:55:52.907123 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907127 | orchestrator | 2026-03-08 00:55:52.907131 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-08 00:55:52.907134 | orchestrator | Sunday 08 March 2026 00:53:02 +0000 (0:00:00.260) 0:08:16.027 ********** 2026-03-08 00:55:52.907138 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907142 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907146 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907149 | orchestrator | 2026-03-08 00:55:52.907153 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-08 00:55:52.907157 | orchestrator | Sunday 08 March 2026 00:53:02 +0000 (0:00:00.327) 0:08:16.355 ********** 2026-03-08 00:55:52.907161 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907164 | orchestrator | 2026-03-08 00:55:52.907168 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-08 00:55:52.907172 | orchestrator | Sunday 08 March 2026 00:53:02 +0000 (0:00:00.241) 0:08:16.596 ********** 2026-03-08 00:55:52.907176 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907179 | orchestrator | 2026-03-08 00:55:52.907183 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-08 00:55:52.907187 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.217) 0:08:16.814 ********** 2026-03-08 00:55:52.907190 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907194 | orchestrator | 2026-03-08 00:55:52.907198 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-08 00:55:52.907204 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.172) 0:08:16.987 ********** 2026-03-08 00:55:52.907208 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907212 | orchestrator | 2026-03-08 00:55:52.907219 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-08 00:55:52.907223 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.256) 0:08:17.244 ********** 2026-03-08 00:55:52.907226 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907230 | orchestrator | 2026-03-08 00:55:52.907234 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-08 00:55:52.907238 | orchestrator | Sunday 08 March 2026 00:53:03 +0000 (0:00:00.235) 0:08:17.480 ********** 2026-03-08 00:55:52.907241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.907245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.907249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.907253 | orchestrator | skipping: [testbed-node-3] 2026-03-08 
00:55:52.907256 | orchestrator | 2026-03-08 00:55:52.907260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-08 00:55:52.907264 | orchestrator | Sunday 08 March 2026 00:53:04 +0000 (0:00:01.040) 0:08:18.520 ********** 2026-03-08 00:55:52.907268 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907271 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907275 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907279 | orchestrator | 2026-03-08 00:55:52.907283 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-08 00:55:52.907286 | orchestrator | Sunday 08 March 2026 00:53:05 +0000 (0:00:00.335) 0:08:18.856 ********** 2026-03-08 00:55:52.907290 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907294 | orchestrator | 2026-03-08 00:55:52.907297 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-08 00:55:52.907301 | orchestrator | Sunday 08 March 2026 00:53:05 +0000 (0:00:00.234) 0:08:19.090 ********** 2026-03-08 00:55:52.907305 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907309 | orchestrator | 2026-03-08 00:55:52.907312 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-08 00:55:52.907316 | orchestrator | 2026-03-08 00:55:52.907320 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-08 00:55:52.907324 | orchestrator | Sunday 08 March 2026 00:53:06 +0000 (0:00:00.719) 0:08:19.810 ********** 2026-03-08 00:55:52.907330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.907336 | orchestrator | 2026-03-08 00:55:52.907339 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-08 00:55:52.907343 | orchestrator | Sunday 08 March 2026 00:53:07 +0000 (0:00:01.447) 0:08:21.257 ********** 2026-03-08 00:55:52.907347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:55:52.907351 | orchestrator | 2026-03-08 00:55:52.907355 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-08 00:55:52.907359 | orchestrator | Sunday 08 March 2026 00:53:08 +0000 (0:00:01.384) 0:08:22.642 ********** 2026-03-08 00:55:52.907362 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907366 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907370 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907373 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907377 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.907381 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907385 | orchestrator | 2026-03-08 00:55:52.907388 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-08 00:55:52.907392 | orchestrator | Sunday 08 March 2026 00:53:10 +0000 (0:00:01.308) 0:08:23.950 ********** 2026-03-08 00:55:52.907399 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907402 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907406 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907410 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907414 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907417 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907421 | orchestrator | 2026-03-08 00:55:52.907425 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-08 00:55:52.907429 | orchestrator | Sunday 08 
March 2026 00:53:10 +0000 (0:00:00.730) 0:08:24.681 ********** 2026-03-08 00:55:52.907432 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907436 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907440 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907443 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907447 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907451 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907455 | orchestrator | 2026-03-08 00:55:52.907458 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-08 00:55:52.907462 | orchestrator | Sunday 08 March 2026 00:53:12 +0000 (0:00:01.121) 0:08:25.803 ********** 2026-03-08 00:55:52.907466 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907470 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907474 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907477 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907481 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907485 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907489 | orchestrator | 2026-03-08 00:55:52.907492 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-08 00:55:52.907496 | orchestrator | Sunday 08 March 2026 00:53:12 +0000 (0:00:00.801) 0:08:26.604 ********** 2026-03-08 00:55:52.907500 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907504 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907507 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907511 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907515 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.907518 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907522 | orchestrator | 2026-03-08 00:55:52.907526 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-08 00:55:52.907530 | orchestrator | Sunday 08 March 2026 00:53:14 +0000 (0:00:01.353) 0:08:27.958 ********** 2026-03-08 00:55:52.907534 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907537 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907544 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907548 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907552 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907556 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907560 | orchestrator | 2026-03-08 00:55:52.907563 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-08 00:55:52.907567 | orchestrator | Sunday 08 March 2026 00:53:14 +0000 (0:00:00.618) 0:08:28.576 ********** 2026-03-08 00:55:52.907571 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907575 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907578 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907617 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907621 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907625 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907629 | orchestrator | 2026-03-08 00:55:52.907633 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-08 00:55:52.907637 | orchestrator | Sunday 08 March 2026 00:53:15 +0000 (0:00:00.935) 0:08:29.511 ********** 2026-03-08 00:55:52.907640 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907644 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907648 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907655 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907659 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.907663 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907667 | 
orchestrator | 2026-03-08 00:55:52.907671 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-08 00:55:52.907675 | orchestrator | Sunday 08 March 2026 00:53:16 +0000 (0:00:01.063) 0:08:30.575 ********** 2026-03-08 00:55:52.907678 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907682 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907686 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907690 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907693 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.907697 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907701 | orchestrator | 2026-03-08 00:55:52.907705 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-08 00:55:52.907709 | orchestrator | Sunday 08 March 2026 00:53:18 +0000 (0:00:01.483) 0:08:32.059 ********** 2026-03-08 00:55:52.907712 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907716 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907720 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907724 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907730 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907734 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907738 | orchestrator | 2026-03-08 00:55:52.907742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-08 00:55:52.907746 | orchestrator | Sunday 08 March 2026 00:53:18 +0000 (0:00:00.618) 0:08:32.678 ********** 2026-03-08 00:55:52.907749 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907753 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907757 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907761 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907764 | orchestrator | ok: [testbed-node-1] 2026-03-08 
00:55:52.907768 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907772 | orchestrator | 2026-03-08 00:55:52.907776 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-08 00:55:52.907780 | orchestrator | Sunday 08 March 2026 00:53:19 +0000 (0:00:00.824) 0:08:33.502 ********** 2026-03-08 00:55:52.907783 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907787 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907791 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907795 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907799 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907802 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907806 | orchestrator | 2026-03-08 00:55:52.907810 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-08 00:55:52.907814 | orchestrator | Sunday 08 March 2026 00:53:20 +0000 (0:00:00.600) 0:08:34.103 ********** 2026-03-08 00:55:52.907817 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907821 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907825 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907829 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907833 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907837 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907840 | orchestrator | 2026-03-08 00:55:52.907844 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-08 00:55:52.907848 | orchestrator | Sunday 08 March 2026 00:53:21 +0000 (0:00:00.806) 0:08:34.909 ********** 2026-03-08 00:55:52.907852 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.907856 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.907874 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.907878 | orchestrator | skipping: [testbed-node-0] 
2026-03-08 00:55:52.907882 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907886 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907890 | orchestrator | 2026-03-08 00:55:52.907894 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-08 00:55:52.907901 | orchestrator | Sunday 08 March 2026 00:53:21 +0000 (0:00:00.615) 0:08:35.525 ********** 2026-03-08 00:55:52.907904 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907908 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907912 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907916 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907919 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907923 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907927 | orchestrator | 2026-03-08 00:55:52.907931 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-08 00:55:52.907934 | orchestrator | Sunday 08 March 2026 00:53:22 +0000 (0:00:00.813) 0:08:36.338 ********** 2026-03-08 00:55:52.907938 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907942 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.907946 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907949 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:55:52.907953 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:55:52.907957 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:55:52.907960 | orchestrator | 2026-03-08 00:55:52.907964 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-08 00:55:52.907968 | orchestrator | Sunday 08 March 2026 00:53:23 +0000 (0:00:00.594) 0:08:36.932 ********** 2026-03-08 00:55:52.907976 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.907980 | orchestrator | skipping: [testbed-node-4] 
2026-03-08 00:55:52.907983 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.907987 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.907991 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.907995 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.907998 | orchestrator | 2026-03-08 00:55:52.908002 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-08 00:55:52.908006 | orchestrator | Sunday 08 March 2026 00:53:24 +0000 (0:00:00.854) 0:08:37.787 ********** 2026-03-08 00:55:52.908010 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.908013 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.908017 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.908021 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.908024 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.908028 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.908032 | orchestrator | 2026-03-08 00:55:52.908036 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-08 00:55:52.908040 | orchestrator | Sunday 08 March 2026 00:53:24 +0000 (0:00:00.680) 0:08:38.468 ********** 2026-03-08 00:55:52.908043 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.908047 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.908051 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.908054 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:55:52.908058 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:55:52.908062 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:55:52.908066 | orchestrator | 2026-03-08 00:55:52.908069 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-08 00:55:52.908073 | orchestrator | Sunday 08 March 2026 00:53:26 +0000 (0:00:01.306) 0:08:39.774 ********** 2026-03-08 00:55:52.908077 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)]
2026-03-08 00:55:52.908081 | orchestrator |
2026-03-08 00:55:52.908085 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-08 00:55:52.908088 | orchestrator | Sunday 08 March 2026 00:53:30 +0000 (0:00:04.155) 0:08:43.930 **********
2026-03-08 00:55:52.908092 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:55:52.908096 | orchestrator |
2026-03-08 00:55:52.908100 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-08 00:55:52.908103 | orchestrator | Sunday 08 March 2026 00:53:32 +0000 (0:00:02.118) 0:08:46.048 **********
2026-03-08 00:55:52.908107 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.908216 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.908234 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.908240 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.908245 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.908250 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.908256 | orchestrator |
2026-03-08 00:55:52.908262 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-08 00:55:52.908268 | orchestrator | Sunday 08 March 2026 00:53:34 +0000 (0:00:01.714) 0:08:47.763 **********
2026-03-08 00:55:52.908274 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.908280 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.908286 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.908292 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.908298 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.908304 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.908309 | orchestrator |
2026-03-08 00:55:52.908315 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-08 00:55:52.908321 | orchestrator | Sunday 08 March 2026 00:53:35 +0000 (0:00:00.978) 0:08:48.741 **********
2026-03-08 00:55:52.908328 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.908368 | orchestrator |
2026-03-08 00:55:52.908373 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-08 00:55:52.908377 | orchestrator | Sunday 08 March 2026 00:53:36 +0000 (0:00:01.328) 0:08:50.070 **********
2026-03-08 00:55:52.908381 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.908385 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.908388 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.908392 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.908396 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.908660 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.908667 | orchestrator |
2026-03-08 00:55:52.908671 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-08 00:55:52.908675 | orchestrator | Sunday 08 March 2026 00:53:38 +0000 (0:00:01.785) 0:08:51.856 **********
2026-03-08 00:55:52.908679 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.908683 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.908687 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.908690 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.908694 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.908698 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.908702 | orchestrator |
2026-03-08 00:55:52.908706 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-08 00:55:52.908710 | orchestrator | Sunday 08 March 2026 00:53:41 +0000 (0:00:03.436) 0:08:55.293 **********
2026-03-08 00:55:52.908714 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:55:52.908718 | orchestrator |
2026-03-08 00:55:52.908722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-08 00:55:52.908726 | orchestrator | Sunday 08 March 2026 00:53:42 +0000 (0:00:01.409) 0:08:56.702 **********
2026-03-08 00:55:52.908729 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.908733 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.908737 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.908741 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.908745 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.908749 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.908752 | orchestrator |
2026-03-08 00:55:52.908756 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-08 00:55:52.908760 | orchestrator | Sunday 08 March 2026 00:53:43 +0000 (0:00:00.848) 0:08:57.551 **********
2026-03-08 00:55:52.908764 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.908773 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.908784 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.908788 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:55:52.908792 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:55:52.908795 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:55:52.908799 | orchestrator |
2026-03-08 00:55:52.908803 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-08 00:55:52.908807 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:02.290) 0:08:59.842 **********
2026-03-08 00:55:52.908811 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.908814 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.908818 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.908822 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:55:52.908826 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:55:52.908829 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:55:52.908833 | orchestrator |
2026-03-08 00:55:52.908837 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-08 00:55:52.908841 | orchestrator |
2026-03-08 00:55:52.908845 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:55:52.908849 | orchestrator | Sunday 08 March 2026 00:53:47 +0000 (0:00:01.086) 0:09:00.928 **********
2026-03-08 00:55:52.908853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.908857 | orchestrator |
2026-03-08 00:55:52.908884 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:55:52.908888 | orchestrator | Sunday 08 March 2026 00:53:47 +0000 (0:00:00.502) 0:09:01.430 **********
2026-03-08 00:55:52.908892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.908895 | orchestrator |
2026-03-08 00:55:52.908899 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:55:52.908903 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.785) 0:09:02.216 **********
2026-03-08 00:55:52.908907 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.908910 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.908914 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.908918 | orchestrator |
2026-03-08 00:55:52.908926 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:55:52.908930 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.324) 0:09:02.541 **********
2026-03-08 00:55:52.908934 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.908938 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.908942 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.908945 | orchestrator |
2026-03-08 00:55:52.908949 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:55:52.908953 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:00.746) 0:09:03.288 **********
2026-03-08 00:55:52.908957 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.908961 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.908964 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.908968 | orchestrator |
2026-03-08 00:55:52.908972 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:55:52.908976 | orchestrator | Sunday 08 March 2026 00:53:50 +0000 (0:00:01.088) 0:09:04.376 **********
2026-03-08 00:55:52.908979 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.908983 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.908987 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.908991 | orchestrator |
2026-03-08 00:55:52.908994 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:55:52.908998 | orchestrator | Sunday 08 March 2026 00:53:51 +0000 (0:00:00.835) 0:09:05.212 **********
2026-03-08 00:55:52.909002 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909006 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909010 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909014 | orchestrator |
2026-03-08 00:55:52.909020 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:55:52.909024 | orchestrator | Sunday 08 March 2026 00:53:51 +0000 (0:00:00.335) 0:09:05.548 **********
2026-03-08 00:55:52.909028 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909032 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909036 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909039 | orchestrator |
2026-03-08 00:55:52.909043 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:55:52.909047 | orchestrator | Sunday 08 March 2026 00:53:52 +0000 (0:00:00.337) 0:09:05.885 **********
2026-03-08 00:55:52.909051 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909054 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909058 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909062 | orchestrator |
2026-03-08 00:55:52.909096 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:55:52.909101 | orchestrator | Sunday 08 March 2026 00:53:52 +0000 (0:00:00.602) 0:09:06.488 **********
2026-03-08 00:55:52.909104 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909108 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909112 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909116 | orchestrator |
2026-03-08 00:55:52.909119 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:55:52.909123 | orchestrator | Sunday 08 March 2026 00:53:53 +0000 (0:00:00.839) 0:09:07.328 **********
2026-03-08 00:55:52.909127 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909131 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909134 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909138 | orchestrator |
2026-03-08 00:55:52.909142 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:55:52.909146 | orchestrator | Sunday 08 March 2026 00:53:54 +0000 (0:00:01.139) 0:09:08.467 **********
2026-03-08 00:55:52.909150 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909153 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909157 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909161 | orchestrator |
2026-03-08 00:55:52.909165 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:55:52.909169 | orchestrator | Sunday 08 March 2026 00:53:55 +0000 (0:00:00.452) 0:09:08.920 **********
2026-03-08 00:55:52.909172 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909180 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909184 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909187 | orchestrator |
2026-03-08 00:55:52.909191 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:55:52.909195 | orchestrator | Sunday 08 March 2026 00:53:55 +0000 (0:00:00.594) 0:09:09.514 **********
2026-03-08 00:55:52.909199 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909203 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909207 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909212 | orchestrator |
2026-03-08 00:55:52.909217 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:55:52.909221 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.332) 0:09:09.847 **********
2026-03-08 00:55:52.909226 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909230 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909235 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909239 | orchestrator |
2026-03-08 00:55:52.909243 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:55:52.909248 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.335) 0:09:10.182 **********
2026-03-08 00:55:52.909252 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909256 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909261 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909265 | orchestrator |
2026-03-08 00:55:52.909270 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:55:52.909277 | orchestrator | Sunday 08 March 2026 00:53:56 +0000 (0:00:00.347) 0:09:10.529 **********
2026-03-08 00:55:52.909282 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909286 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909291 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909295 | orchestrator |
2026-03-08 00:55:52.909300 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:55:52.909304 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:00.589) 0:09:11.118 **********
2026-03-08 00:55:52.909309 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909313 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909318 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909322 | orchestrator |
2026-03-08 00:55:52.909326 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:55:52.909337 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:00.298) 0:09:11.417 **********
2026-03-08 00:55:52.909341 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909346 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909350 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909354 | orchestrator |
2026-03-08 00:55:52.909358 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:55:52.909363 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:00.298) 0:09:11.715 **********
2026-03-08 00:55:52.909367 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909372 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909376 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909380 | orchestrator |
2026-03-08 00:55:52.909385 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:55:52.909389 | orchestrator | Sunday 08 March 2026 00:53:58 +0000 (0:00:00.358) 0:09:12.073 **********
2026-03-08 00:55:52.909394 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909398 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909403 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909407 | orchestrator |
2026-03-08 00:55:52.909411 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-08 00:55:52.909416 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:01.086) 0:09:13.159 **********
2026-03-08 00:55:52.909420 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909424 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909429 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-08 00:55:52.909433 | orchestrator |
2026-03-08 00:55:52.909438 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-08 00:55:52.909442 | orchestrator | Sunday 08 March 2026 00:53:59 +0000 (0:00:00.517) 0:09:13.677 **********
2026-03-08 00:55:52.909446 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:55:52.909450 | orchestrator |
2026-03-08 00:55:52.909455 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-08 00:55:52.909459 | orchestrator | Sunday 08 March 2026 00:54:02 +0000 (0:00:02.215) 0:09:15.893 **********
2026-03-08 00:55:52.909465 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-08 00:55:52.909471 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909475 | orchestrator |
2026-03-08 00:55:52.909523 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-08 00:55:52.909528 | orchestrator | Sunday 08 March 2026 00:54:02 +0000 (0:00:00.179) 0:09:16.072 **********
2026-03-08 00:55:52.909534 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-08 00:55:52.909544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-08 00:55:52.909553 | orchestrator |
2026-03-08 00:55:52.909558 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-08 00:55:52.909562 | orchestrator | Sunday 08 March 2026 00:54:10 +0000 (0:00:08.485) 0:09:24.557 **********
2026-03-08 00:55:52.909571 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-08 00:55:52.909576 | orchestrator |
2026-03-08 00:55:52.909580 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-08 00:55:52.909584 | orchestrator | Sunday 08 March 2026 00:54:14 +0000 (0:00:04.015) 0:09:28.573 **********
2026-03-08 00:55:52.909588 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.909592 | orchestrator |
2026-03-08 00:55:52.909596 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-08 00:55:52.909599 | orchestrator | Sunday 08 March 2026 00:54:15 +0000 (0:00:00.733) 0:09:29.307 **********
2026-03-08 00:55:52.909603 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-08 00:55:52.909607 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-08 00:55:52.909611 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-08 00:55:52.909615 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-08 00:55:52.909618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-08 00:55:52.909622 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-08 00:55:52.909626 | orchestrator |
2026-03-08 00:55:52.909630 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-08 00:55:52.909633 | orchestrator | Sunday 08 March 2026 00:54:16 +0000 (0:00:01.374) 0:09:30.681 **********
2026-03-08 00:55:52.909637 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.909641 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.909645 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:55:52.909649 | orchestrator |
2026-03-08 00:55:52.909652 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:55:52.909656 | orchestrator | Sunday 08 March 2026 00:54:19 +0000 (0:00:02.221) 0:09:32.903 **********
2026-03-08 00:55:52.909663 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.909667 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.909671 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909675 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 00:55:52.909679 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-08 00:55:52.909682 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909686 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 00:55:52.909690 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-08 00:55:52.909694 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909697 | orchestrator |
2026-03-08 00:55:52.909701 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-08 00:55:52.909705 | orchestrator | Sunday 08 March 2026 00:54:20 +0000 (0:00:01.537) 0:09:34.441 **********
2026-03-08 00:55:52.909709 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909712 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909716 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909720 | orchestrator |
2026-03-08 00:55:52.909724 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-08 00:55:52.909728 | orchestrator | Sunday 08 March 2026 00:54:23 +0000 (0:00:02.565) 0:09:37.006 **********
2026-03-08 00:55:52.909735 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.909738 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.909742 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.909746 | orchestrator |
2026-03-08 00:55:52.909750 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-08 00:55:52.909754 | orchestrator | Sunday 08 March 2026 00:54:23 +0000 (0:00:00.443) 0:09:37.450 **********
2026-03-08 00:55:52.909758 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.909761 | orchestrator |
2026-03-08 00:55:52.909765 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-08 00:55:52.909769 | orchestrator | Sunday 08 March 2026 00:54:24 +0000 (0:00:00.908) 0:09:38.358 **********
2026-03-08 00:55:52.909773 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.909776 | orchestrator |
2026-03-08 00:55:52.909780 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-08 00:55:52.909784 | orchestrator | Sunday 08 March 2026 00:54:25 +0000 (0:00:00.590) 0:09:38.949 **********
2026-03-08 00:55:52.909788 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909791 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909849 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909854 | orchestrator |
2026-03-08 00:55:52.909857 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-08 00:55:52.909894 | orchestrator | Sunday 08 March 2026 00:54:26 +0000 (0:00:01.177) 0:09:40.126 **********
2026-03-08 00:55:52.909898 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909902 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909905 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909909 | orchestrator |
2026-03-08 00:55:52.909913 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-08 00:55:52.909917 | orchestrator | Sunday 08 March 2026 00:54:27 +0000 (0:00:01.536) 0:09:41.663 **********
2026-03-08 00:55:52.909920 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909924 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909928 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909932 | orchestrator |
2026-03-08 00:55:52.909935 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-08 00:55:52.909939 | orchestrator | Sunday 08 March 2026 00:54:30 +0000 (0:00:02.105) 0:09:43.769 **********
2026-03-08 00:55:52.909943 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909950 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.909954 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.909958 | orchestrator |
2026-03-08 00:55:52.909962 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-08 00:55:52.909965 | orchestrator | Sunday 08 March 2026 00:54:32 +0000 (0:00:02.221) 0:09:45.990 **********
2026-03-08 00:55:52.909969 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.909973 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.909977 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.909980 | orchestrator |
2026-03-08 00:55:52.909984 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-08 00:55:52.909988 | orchestrator | Sunday 08 March 2026 00:54:33 +0000 (0:00:01.534) 0:09:47.525 **********
2026-03-08 00:55:52.909992 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.909996 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.910000 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.910003 | orchestrator |
2026-03-08 00:55:52.910007 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-08 00:55:52.910011 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:00.774) 0:09:48.299 **********
2026-03-08 00:55:52.910042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.910046 | orchestrator |
2026-03-08 00:55:52.910054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-08 00:55:52.910057 | orchestrator | Sunday 08 March 2026 00:54:35 +0000 (0:00:00.863) 0:09:49.163 **********
2026-03-08 00:55:52.910061 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910065 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910069 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910073 | orchestrator |
2026-03-08 00:55:52.910076 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-08 00:55:52.910080 | orchestrator | Sunday 08 March 2026 00:54:35 +0000 (0:00:00.350) 0:09:49.514 **********
2026-03-08 00:55:52.910084 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.910087 | orchestrator | changed: [testbed-node-4]
2026-03-08 00:55:52.910091 | orchestrator | changed: [testbed-node-5]
2026-03-08 00:55:52.910095 | orchestrator |
2026-03-08 00:55:52.910099 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-08 00:55:52.910105 | orchestrator | Sunday 08 March 2026 00:54:37 +0000 (0:00:01.231) 0:09:50.746 **********
2026-03-08 00:55:52.910109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-08 00:55:52.910113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-08 00:55:52.910116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-08 00:55:52.910120 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910124 | orchestrator |
2026-03-08 00:55:52.910127 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-08 00:55:52.910131 | orchestrator | Sunday 08 March 2026 00:54:37 +0000 (0:00:00.898) 0:09:51.644 **********
2026-03-08 00:55:52.910135 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910139 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910142 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910146 | orchestrator |
2026-03-08 00:55:52.910150 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-08 00:55:52.910154 | orchestrator |
2026-03-08 00:55:52.910157 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-08 00:55:52.910161 | orchestrator | Sunday 08 March 2026 00:54:38 +0000 (0:00:00.878) 0:09:52.523 **********
2026-03-08 00:55:52.910165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.910169 | orchestrator |
2026-03-08 00:55:52.910172 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-08 00:55:52.910176 | orchestrator | Sunday 08 March 2026 00:54:39 +0000 (0:00:00.531) 0:09:53.055 **********
2026-03-08 00:55:52.910180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.910183 | orchestrator |
2026-03-08 00:55:52.910187 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-08 00:55:52.910191 | orchestrator | Sunday 08 March 2026 00:54:40 +0000 (0:00:00.790) 0:09:53.845 **********
2026-03-08 00:55:52.910195 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910198 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910202 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910206 | orchestrator |
2026-03-08 00:55:52.910209 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-08 00:55:52.910213 | orchestrator | Sunday 08 March 2026 00:54:40 +0000 (0:00:00.309) 0:09:54.155 **********
2026-03-08 00:55:52.910217 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910221 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910224 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910228 | orchestrator |
2026-03-08 00:55:52.910232 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-08 00:55:52.910236 | orchestrator | Sunday 08 March 2026 00:54:41 +0000 (0:00:00.769) 0:09:54.924 **********
2026-03-08 00:55:52.910239 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910243 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910250 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910254 | orchestrator |
2026-03-08 00:55:52.910257 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-08 00:55:52.910261 | orchestrator | Sunday 08 March 2026 00:54:41 +0000 (0:00:00.740) 0:09:55.665 **********
2026-03-08 00:55:52.910265 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910268 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910272 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910276 | orchestrator |
2026-03-08 00:55:52.910279 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-08 00:55:52.910283 | orchestrator | Sunday 08 March 2026 00:54:43 +0000 (0:00:01.078) 0:09:56.744 **********
2026-03-08 00:55:52.910287 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910291 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910294 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910298 | orchestrator |
2026-03-08 00:55:52.910305 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-08 00:55:52.910309 | orchestrator | Sunday 08 March 2026 00:54:43 +0000 (0:00:00.325) 0:09:57.069 **********
2026-03-08 00:55:52.910313 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910317 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910321 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910324 | orchestrator |
2026-03-08 00:55:52.910328 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-08 00:55:52.910332 | orchestrator | Sunday 08 March 2026 00:54:43 +0000 (0:00:00.323) 0:09:57.393 **********
2026-03-08 00:55:52.910336 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910340 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910343 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910347 | orchestrator |
2026-03-08 00:55:52.910351 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-08 00:55:52.910355 | orchestrator | Sunday 08 March 2026 00:54:43 +0000 (0:00:00.320) 0:09:57.713 **********
2026-03-08 00:55:52.910359 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910362 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910366 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910370 | orchestrator |
2026-03-08 00:55:52.910374 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-08 00:55:52.910377 | orchestrator | Sunday 08 March 2026 00:54:44 +0000 (0:00:01.014) 0:09:58.728 **********
2026-03-08 00:55:52.910381 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910385 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910389 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910392 | orchestrator |
2026-03-08 00:55:52.910396 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-08 00:55:52.910400 | orchestrator | Sunday 08 March 2026 00:54:45 +0000 (0:00:00.786) 0:09:59.515 **********
2026-03-08 00:55:52.910404 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910407 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910411 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910415 | orchestrator |
2026-03-08 00:55:52.910419 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-08 00:55:52.910422 | orchestrator | Sunday 08 March 2026 00:54:46 +0000 (0:00:00.290) 0:09:59.805 **********
2026-03-08 00:55:52.910426 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910430 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910436 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910440 | orchestrator |
2026-03-08 00:55:52.910444 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-08 00:55:52.910448 | orchestrator | Sunday 08 March 2026 00:54:46 +0000 (0:00:00.304) 0:10:00.110 **********
2026-03-08 00:55:52.910452 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910455 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910459 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910463 | orchestrator |
2026-03-08 00:55:52.910469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-08 00:55:52.910473 | orchestrator | Sunday 08 March 2026 00:54:46 +0000 (0:00:00.627) 0:10:00.738 **********
2026-03-08 00:55:52.910477 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910481 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910484 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910488 | orchestrator |
2026-03-08 00:55:52.910492 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-08 00:55:52.910495 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.351) 0:10:01.089 **********
2026-03-08 00:55:52.910499 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910503 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910507 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910510 | orchestrator |
2026-03-08 00:55:52.910514 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-08 00:55:52.910518 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.321) 0:10:01.411 **********
2026-03-08 00:55:52.910522 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910525 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910529 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910716 | orchestrator |
2026-03-08 00:55:52.910723 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-08 00:55:52.910727 | orchestrator | Sunday 08 March 2026 00:54:47 +0000 (0:00:00.313) 0:10:01.724 **********
2026-03-08 00:55:52.910731 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910735 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910738 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910742 | orchestrator |
2026-03-08 00:55:52.910746 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-08 00:55:52.910750 | orchestrator | Sunday 08 March 2026 00:54:48 +0000 (0:00:00.557) 0:10:02.281 **********
2026-03-08 00:55:52.910754 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:55:52.910757 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:55:52.910761 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:55:52.910765 | orchestrator |
2026-03-08 00:55:52.910768 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-08 00:55:52.910772 | orchestrator | Sunday 08 March 2026 00:54:48 +0000 (0:00:00.290) 0:10:02.572 **********
2026-03-08 00:55:52.910776 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910780 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910784 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910787 | orchestrator |
2026-03-08 00:55:52.910791 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-08 00:55:52.910795 | orchestrator | Sunday 08 March 2026 00:54:49 +0000 (0:00:00.361) 0:10:02.934 **********
2026-03-08 00:55:52.910799 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:55:52.910802 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:55:52.910806 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:55:52.910858 | orchestrator |
2026-03-08 00:55:52.910875 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-08 00:55:52.910879 | orchestrator | Sunday 08 March 2026 00:54:49 +0000 (0:00:00.803) 0:10:03.737 **********
2026-03-08 00:55:52.910883 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:55:52.910887 | orchestrator |
2026-03-08 00:55:52.910891 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-08 00:55:52.910898 | orchestrator | Sunday 08 March 2026 00:54:50 +0000 (0:00:00.530) 0:10:04.268 **********
2026-03-08 00:55:52.910902 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:55:52.910906 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.910910 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-08 00:55:52.910914 | orchestrator |
2026-03-08 00:55:52.910918 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-08 00:55:52.910926 | orchestrator | Sunday 08 March 2026 00:54:52 +0000 (0:00:02.197) 0:10:06.466 **********
2026-03-08 00:55:52.910930 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.910934 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-08 00:55:52.910938 | orchestrator | changed: [testbed-node-3]
2026-03-08 00:55:52.910941 | orchestrator
| changed: [testbed-node-4] => (item=None) 2026-03-08 00:55:52.910945 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-08 00:55:52.910949 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-08 00:55:52.910953 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.910956 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-08 00:55:52.910960 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.910964 | orchestrator | 2026-03-08 00:55:52.910967 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-08 00:55:52.910971 | orchestrator | Sunday 08 March 2026 00:54:54 +0000 (0:00:01.408) 0:10:07.874 ********** 2026-03-08 00:55:52.910975 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.910979 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.910983 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.910986 | orchestrator | 2026-03-08 00:55:52.910990 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-08 00:55:52.910994 | orchestrator | Sunday 08 March 2026 00:54:54 +0000 (0:00:00.318) 0:10:08.193 ********** 2026-03-08 00:55:52.910998 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.911002 | orchestrator | 2026-03-08 00:55:52.911005 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-08 00:55:52.911012 | orchestrator | Sunday 08 March 2026 00:54:55 +0000 (0:00:00.545) 0:10:08.739 ********** 2026-03-08 00:55:52.911017 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911021 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911033 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911037 | orchestrator | 2026-03-08 00:55:52.911041 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-08 00:55:52.911053 | orchestrator | Sunday 08 March 2026 00:54:56 +0000 (0:00:01.430) 0:10:10.169 ********** 2026-03-08 00:55:52.911059 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911065 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-08 00:55:52.911071 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911077 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-08 00:55:52.911083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911089 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-08 00:55:52.911116 | orchestrator | 2026-03-08 00:55:52.911120 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-08 00:55:52.911124 | orchestrator | Sunday 08 March 2026 00:55:01 +0000 (0:00:04.783) 0:10:14.953 ********** 2026-03-08 00:55:52.911128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911132 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:55:52.911135 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911143 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:55:52.911147 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:55:52.911150 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:55:52.911154 | orchestrator | 2026-03-08 00:55:52.911158 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-08 00:55:52.911162 | orchestrator | Sunday 08 March 2026 00:55:03 +0000 (0:00:02.677) 0:10:17.630 ********** 2026-03-08 00:55:52.911165 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-08 00:55:52.911169 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.911173 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-08 00:55:52.911177 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.911180 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-08 00:55:52.911184 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.911188 | orchestrator | 2026-03-08 00:55:52.911192 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-08 00:55:52.911199 | orchestrator | Sunday 08 March 2026 00:55:05 +0000 (0:00:01.642) 0:10:19.273 ********** 2026-03-08 00:55:52.911203 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-08 00:55:52.911206 | orchestrator | 2026-03-08 00:55:52.911210 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-08 00:55:52.911214 | orchestrator | Sunday 08 March 2026 00:55:05 +0000 (0:00:00.230) 0:10:19.504 ********** 2026-03-08 00:55:52.911218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-08 00:55:52.911222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911237 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.911241 | orchestrator | 2026-03-08 00:55:52.911245 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-08 00:55:52.911248 | orchestrator | Sunday 08 March 2026 00:55:06 +0000 (0:00:01.122) 0:10:20.627 ********** 2026-03-08 00:55:52.911252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-08 00:55:52.911276 | orchestrator | skipping: [testbed-node-3] 2026-03-08 
00:55:52.911280 | orchestrator | 2026-03-08 00:55:52.911285 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-08 00:55:52.911289 | orchestrator | Sunday 08 March 2026 00:55:07 +0000 (0:00:00.591) 0:10:21.218 ********** 2026-03-08 00:55:52.911294 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-08 00:55:52.911306 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-08 00:55:52.911310 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-08 00:55:52.911315 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-08 00:55:52.911319 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-08 00:55:52.911323 | orchestrator | 2026-03-08 00:55:52.911328 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-08 00:55:52.911332 | orchestrator | Sunday 08 March 2026 00:55:38 +0000 (0:00:31.226) 0:10:52.445 ********** 2026-03-08 00:55:52.911337 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.911341 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.911345 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.911350 | orchestrator | 2026-03-08 00:55:52.911354 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-08 00:55:52.911358 | orchestrator | 
Sunday 08 March 2026 00:55:39 +0000 (0:00:00.330) 0:10:52.775 ********** 2026-03-08 00:55:52.911363 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.911367 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.911372 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.911376 | orchestrator | 2026-03-08 00:55:52.911380 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-08 00:55:52.911385 | orchestrator | Sunday 08 March 2026 00:55:39 +0000 (0:00:00.349) 0:10:53.124 ********** 2026-03-08 00:55:52.911389 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.911393 | orchestrator | 2026-03-08 00:55:52.911398 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-08 00:55:52.911402 | orchestrator | Sunday 08 March 2026 00:55:40 +0000 (0:00:00.948) 0:10:54.073 ********** 2026-03-08 00:55:52.911407 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.911411 | orchestrator | 2026-03-08 00:55:52.911419 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-08 00:55:52.911423 | orchestrator | Sunday 08 March 2026 00:55:40 +0000 (0:00:00.571) 0:10:54.644 ********** 2026-03-08 00:55:52.911427 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.911432 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.911436 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.911440 | orchestrator | 2026-03-08 00:55:52.911445 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-08 00:55:52.911449 | orchestrator | Sunday 08 March 2026 00:55:42 +0000 (0:00:01.347) 0:10:55.992 ********** 2026-03-08 00:55:52.911454 | orchestrator | changed: 
[testbed-node-3] 2026-03-08 00:55:52.911458 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.911462 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.911466 | orchestrator | 2026-03-08 00:55:52.911471 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-08 00:55:52.911475 | orchestrator | Sunday 08 March 2026 00:55:43 +0000 (0:00:01.675) 0:10:57.667 ********** 2026-03-08 00:55:52.911480 | orchestrator | changed: [testbed-node-3] 2026-03-08 00:55:52.911484 | orchestrator | changed: [testbed-node-4] 2026-03-08 00:55:52.911488 | orchestrator | changed: [testbed-node-5] 2026-03-08 00:55:52.911493 | orchestrator | 2026-03-08 00:55:52.911497 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-08 00:55:52.911505 | orchestrator | Sunday 08 March 2026 00:55:45 +0000 (0:00:01.983) 0:10:59.651 ********** 2026-03-08 00:55:52.911509 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911514 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911518 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-08 00:55:52.911523 | orchestrator | 2026-03-08 00:55:52.911528 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-08 00:55:52.911532 | orchestrator | Sunday 08 March 2026 00:55:48 +0000 (0:00:02.838) 0:11:02.489 ********** 2026-03-08 00:55:52.911537 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.911546 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.911550 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.911554 | orchestrator 
| 2026-03-08 00:55:52.911558 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-08 00:55:52.911562 | orchestrator | Sunday 08 March 2026 00:55:49 +0000 (0:00:00.425) 0:11:02.914 ********** 2026-03-08 00:55:52.911565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:55:52.911569 | orchestrator | 2026-03-08 00:55:52.911573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-08 00:55:52.911577 | orchestrator | Sunday 08 March 2026 00:55:49 +0000 (0:00:00.561) 0:11:03.476 ********** 2026-03-08 00:55:52.911581 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.911585 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.911588 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.911592 | orchestrator | 2026-03-08 00:55:52.911646 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-08 00:55:52.911650 | orchestrator | Sunday 08 March 2026 00:55:50 +0000 (0:00:00.611) 0:11:04.087 ********** 2026-03-08 00:55:52.911654 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:55:52.911658 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:55:52.911661 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:55:52.911665 | orchestrator | 2026-03-08 00:55:52.911669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-08 00:55:52.911673 | orchestrator | Sunday 08 March 2026 00:55:50 +0000 (0:00:00.347) 0:11:04.434 ********** 2026-03-08 00:55:52.911677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:55:52.911680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:55:52.911684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:55:52.911688 | orchestrator 
| skipping: [testbed-node-3] 2026-03-08 00:55:52.911691 | orchestrator | 2026-03-08 00:55:52.911695 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-08 00:55:52.911699 | orchestrator | Sunday 08 March 2026 00:55:51 +0000 (0:00:00.678) 0:11:05.112 ********** 2026-03-08 00:55:52.911703 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:55:52.911706 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:55:52.911710 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:55:52.911714 | orchestrator | 2026-03-08 00:55:52.911718 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:55:52.911721 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-08 00:55:52.911726 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-08 00:55:52.911730 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-08 00:55:52.911738 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-08 00:55:52.911742 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-08 00:55:52.911749 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-08 00:55:52.911753 | orchestrator | 2026-03-08 00:55:52.911757 | orchestrator | 2026-03-08 00:55:52.911761 | orchestrator | 2026-03-08 00:55:52.911765 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:55:52.911768 | orchestrator | Sunday 08 March 2026 00:55:51 +0000 (0:00:00.248) 0:11:05.360 ********** 2026-03-08 00:55:52.911772 | orchestrator | =============================================================================== 
2026-03-08 00:55:52.911776 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 48.90s 2026-03-08 00:55:52.911780 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.70s 2026-03-08 00:55:52.911783 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.10s 2026-03-08 00:55:52.911787 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.23s 2026-03-08 00:55:52.911791 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.95s 2026-03-08 00:55:52.911794 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s 2026-03-08 00:55:52.911798 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.07s 2026-03-08 00:55:52.911802 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.01s 2026-03-08 00:55:52.911806 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.49s 2026-03-08 00:55:52.911809 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.00s 2026-03-08 00:55:52.911813 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.45s 2026-03-08 00:55:52.911817 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.55s 2026-03-08 00:55:52.911821 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.10s 2026-03-08 00:55:52.911824 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.78s 2026-03-08 00:55:52.911828 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.16s 2026-03-08 00:55:52.911835 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.02s 2026-03-08 
00:55:52.911839 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.95s 2026-03-08 00:55:52.911842 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.93s 2026-03-08 00:55:52.911846 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.58s 2026-03-08 00:55:52.911850 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.57s 2026-03-08 00:55:52.911854 | orchestrator | 2026-03-08 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:55.949611 | orchestrator | 2026-03-08 00:55:55 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:55:55.951322 | orchestrator | 2026-03-08 00:55:55 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:55.954501 | orchestrator | 2026-03-08 00:55:55 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:55.954633 | orchestrator | 2026-03-08 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:55:59.015159 | orchestrator | 2026-03-08 00:55:59 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:55:59.015939 | orchestrator | 2026-03-08 00:55:59 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:55:59.017322 | orchestrator | 2026-03-08 00:55:59 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:55:59.017367 | orchestrator | 2026-03-08 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:02.065742 | orchestrator | 2026-03-08 00:56:02 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:02.066442 | orchestrator | 2026-03-08 00:56:02 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:02.067992 | orchestrator | 2026-03-08 00:56:02 | INFO  
| Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:02.068121 | orchestrator | 2026-03-08 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:05.116779 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:05.117372 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:05.119677 | orchestrator | 2026-03-08 00:56:05 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:05.119737 | orchestrator | 2026-03-08 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:08.169324 | orchestrator | 2026-03-08 00:56:08 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:08.171763 | orchestrator | 2026-03-08 00:56:08 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:08.173697 | orchestrator | 2026-03-08 00:56:08 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:08.173748 | orchestrator | 2026-03-08 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:11.211684 | orchestrator | 2026-03-08 00:56:11 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:11.213850 | orchestrator | 2026-03-08 00:56:11 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:11.216655 | orchestrator | 2026-03-08 00:56:11 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:11.216722 | orchestrator | 2026-03-08 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:14.255697 | orchestrator | 2026-03-08 00:56:14 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:14.256761 | orchestrator | 2026-03-08 00:56:14 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state 
STARTED 2026-03-08 00:56:14.258568 | orchestrator | 2026-03-08 00:56:14 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:14.258760 | orchestrator | 2026-03-08 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:17.290892 | orchestrator | 2026-03-08 00:56:17 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:17.291476 | orchestrator | 2026-03-08 00:56:17 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:17.292436 | orchestrator | 2026-03-08 00:56:17 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:17.292487 | orchestrator | 2026-03-08 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:20.345235 | orchestrator | 2026-03-08 00:56:20 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:20.345330 | orchestrator | 2026-03-08 00:56:20 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:20.345338 | orchestrator | 2026-03-08 00:56:20 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:20.345363 | orchestrator | 2026-03-08 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:23.403475 | orchestrator | 2026-03-08 00:56:23 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:23.405720 | orchestrator | 2026-03-08 00:56:23 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:23.407190 | orchestrator | 2026-03-08 00:56:23 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state STARTED 2026-03-08 00:56:23.407307 | orchestrator | 2026-03-08 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:26.475536 | orchestrator | 2026-03-08 00:56:26 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:26.476308 | orchestrator | 
2026-03-08 00:56:26 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:26.479020 | orchestrator | 2026-03-08 00:56:26.479091 | orchestrator | 2026-03-08 00:56:26 | INFO  | Task 42dada74-7697-47f6-b33d-6c63feba24e0 is in state SUCCESS 2026-03-08 00:56:26.480280 | orchestrator | 2026-03-08 00:56:26.480327 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:56:26.480335 | orchestrator | 2026-03-08 00:56:26.480341 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:56:26.480347 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.256) 0:00:00.256 ********** 2026-03-08 00:56:26.480353 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:26.480360 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:26.480364 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:26.480368 | orchestrator | 2026-03-08 00:56:26.480372 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:56:26.480376 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.306) 0:00:00.562 ********** 2026-03-08 00:56:26.480381 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-08 00:56:26.480385 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-08 00:56:26.480390 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-08 00:56:26.480393 | orchestrator | 2026-03-08 00:56:26.480397 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-08 00:56:26.480401 | orchestrator | 2026-03-08 00:56:26.480405 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:56:26.480408 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.477) 0:00:01.040 ********** 2026-03-08 00:56:26.480412 | orchestrator 
| included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:26.480416 | orchestrator | 2026-03-08 00:56:26.480420 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-08 00:56:26.480424 | orchestrator | Sunday 08 March 2026 00:53:47 +0000 (0:00:00.476) 0:00:01.517 ********** 2026-03-08 00:56:26.480427 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:56:26.480431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:56:26.480435 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-08 00:56:26.480439 | orchestrator | 2026-03-08 00:56:26.480442 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-08 00:56:26.480446 | orchestrator | Sunday 08 March 2026 00:53:48 +0000 (0:00:00.711) 0:00:02.229 ********** 2026-03-08 00:56:26.480452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480494 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480557 | orchestrator | 2026-03-08 00:56:26.480562 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:56:26.480569 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:01.779) 0:00:04.008 ********** 2026-03-08 00:56:26.480576 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:26.480582 | orchestrator | 2026-03-08 00:56:26.480587 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-08 00:56:26.480594 | orchestrator | Sunday 08 March 2026 00:53:50 +0000 (0:00:00.572) 0:00:04.580 ********** 2026-03-08 00:56:26.480605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.480684 | orchestrator | 2026-03-08 00:56:26.480690 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-08 00:56:26.480695 | orchestrator | Sunday 08 March 2026 00:53:53 +0000 (0:00:02.828) 0:00:07.409 ********** 2026-03-08 00:56:26.480706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:56:26.480716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480723 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:26.480729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:56:26.480739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480746 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:26.480752 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:56:26.480766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480773 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:26.480780 | orchestrator | 
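The loop items dumped above all share the same Kolla-style service-definition shape (container_name, image, volumes, healthcheck, haproxy). As a minimal sketch of how the `healthcheck` field in those dicts carries the Docker-style health command — `render_healthcheck` is a hypothetical helper for illustration, not part of kolla-ansible:

```python
# Hypothetical helper: pull the Docker healthcheck command out of a
# Kolla-style service definition shaped like the loop items in this log.
def render_healthcheck(service: dict) -> list:
    hc = service["value"]["healthcheck"]
    # Kolla stores the test as ["CMD-SHELL", "<shell command>"]
    return hc["test"]

# Trimmed-down version of the testbed-node-0 opensearch loop item above.
opensearch = {
    "key": "opensearch",
    "value": {
        "container_name": "opensearch",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
            "timeout": "30",
        },
    },
}

print(render_healthcheck(opensearch))
# → ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200']
```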
2026-03-08 00:56:26.480870 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-08 00:56:26.480879 | orchestrator | Sunday 08 March 2026 00:53:54 +0000 (0:00:01.343) 0:00:08.752 ********** 2026-03-08 00:56:26.480887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:56:26.480898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480906 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:26.480910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-08 00:56:26.480915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2026-03-08 00:56:26.480922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-08 00:56:26.480935 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:26.480939 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:26.480947 | orchestrator | 2026-03-08 00:56:26.480953 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-08 00:56:26.480958 | orchestrator | Sunday 08 March 2026 00:53:55 +0000 (0:00:01.069) 0:00:09.822 ********** 2026-03-08 00:56:26.480967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.480998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481027 | orchestrator | 2026-03-08 00:56:26.481033 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-08 00:56:26.481040 | orchestrator | Sunday 08 March 2026 00:53:58 +0000 (0:00:02.936) 0:00:12.759 ********** 2026-03-08 00:56:26.481046 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481051 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:26.481058 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:26.481063 | orchestrator | 2026-03-08 00:56:26.481073 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-08 00:56:26.481079 | orchestrator | Sunday 08 March 2026 00:54:02 +0000 (0:00:03.471) 0:00:16.231 ********** 2026-03-08 00:56:26.481085 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481091 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:26.481096 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:26.481102 | orchestrator | 2026-03-08 00:56:26.481108 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-08 00:56:26.481112 | orchestrator | Sunday 08 March 2026 00:54:03 +0000 (0:00:01.572) 0:00:17.803 ********** 2026-03-08 00:56:26.481115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.481124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.481133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-08 00:56:26.481137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-08 00:56:26.481215 | orchestrator | 2026-03-08 00:56:26.481219 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:56:26.481223 | orchestrator | Sunday 08 March 2026 00:54:05 +0000 
(0:00:01.868) 0:00:19.671 ********** 2026-03-08 00:56:26.481226 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:26.481230 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:26.481234 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:26.481238 | orchestrator | 2026-03-08 00:56:26.481241 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:56:26.481245 | orchestrator | Sunday 08 March 2026 00:54:05 +0000 (0:00:00.322) 0:00:19.994 ********** 2026-03-08 00:56:26.481249 | orchestrator | 2026-03-08 00:56:26.481253 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:56:26.481256 | orchestrator | Sunday 08 March 2026 00:54:05 +0000 (0:00:00.070) 0:00:20.064 ********** 2026-03-08 00:56:26.481260 | orchestrator | 2026-03-08 00:56:26.481264 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-08 00:56:26.481268 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:00.068) 0:00:20.132 ********** 2026-03-08 00:56:26.481271 | orchestrator | 2026-03-08 00:56:26.481275 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-08 00:56:26.481279 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:00.068) 0:00:20.201 ********** 2026-03-08 00:56:26.481282 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:26.481286 | orchestrator | 2026-03-08 00:56:26.481290 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-08 00:56:26.481294 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:00.676) 0:00:20.877 ********** 2026-03-08 00:56:26.481297 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:26.481301 | orchestrator | 2026-03-08 00:56:26.481305 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 
2026-03-08 00:56:26.481308 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:00.205) 0:00:21.083 ********** 2026-03-08 00:56:26.481312 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481316 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:26.481319 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:26.481323 | orchestrator | 2026-03-08 00:56:26.481327 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-08 00:56:26.481330 | orchestrator | Sunday 08 March 2026 00:54:57 +0000 (0:00:50.456) 0:01:11.539 ********** 2026-03-08 00:56:26.481334 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481338 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:56:26.481341 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:56:26.481345 | orchestrator | 2026-03-08 00:56:26.481349 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-08 00:56:26.481353 | orchestrator | Sunday 08 March 2026 00:56:09 +0000 (0:01:12.037) 0:02:23.577 ********** 2026-03-08 00:56:26.481357 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:26.481360 | orchestrator | 2026-03-08 00:56:26.481364 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-08 00:56:26.481374 | orchestrator | Sunday 08 March 2026 00:56:10 +0000 (0:00:00.679) 0:02:24.256 ********** 2026-03-08 00:56:26.481377 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:26.481381 | orchestrator | 2026-03-08 00:56:26.481385 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-08 00:56:26.481389 | orchestrator | Sunday 08 March 2026 00:56:12 +0000 (0:00:02.686) 0:02:26.943 ********** 2026-03-08 00:56:26.481393 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:26.481396 | 
orchestrator | 2026-03-08 00:56:26.481400 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-08 00:56:26.481404 | orchestrator | Sunday 08 March 2026 00:56:15 +0000 (0:00:02.472) 0:02:29.415 ********** 2026-03-08 00:56:26.481409 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:26.481415 | orchestrator | 2026-03-08 00:56:26.481421 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-08 00:56:26.481427 | orchestrator | Sunday 08 March 2026 00:56:17 +0000 (0:00:02.474) 0:02:31.890 ********** 2026-03-08 00:56:26.481432 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481438 | orchestrator | 2026-03-08 00:56:26.481443 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-08 00:56:26.481450 | orchestrator | Sunday 08 March 2026 00:56:20 +0000 (0:00:03.156) 0:02:35.046 ********** 2026-03-08 00:56:26.481456 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:26.481462 | orchestrator | 2026-03-08 00:56:26.481466 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:56:26.481470 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 00:56:26.481475 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:56:26.481483 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-08 00:56:26.481487 | orchestrator | 2026-03-08 00:56:26.481490 | orchestrator | 2026-03-08 00:56:26.481494 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:56:26.481498 | orchestrator | Sunday 08 March 2026 00:56:23 +0000 (0:00:02.876) 0:02:37.922 ********** 2026-03-08 00:56:26.481502 | orchestrator | 
=============================================================================== 2026-03-08 00:56:26.481505 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.04s 2026-03-08 00:56:26.481509 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.46s 2026-03-08 00:56:26.481513 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.47s 2026-03-08 00:56:26.481516 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.16s 2026-03-08 00:56:26.481520 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.94s 2026-03-08 00:56:26.481524 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.88s 2026-03-08 00:56:26.481527 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.83s 2026-03-08 00:56:26.481531 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.69s 2026-03-08 00:56:26.481535 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.47s 2026-03-08 00:56:26.481538 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.47s 2026-03-08 00:56:26.481542 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.87s 2026-03-08 00:56:26.481546 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.78s 2026-03-08 00:56:26.481549 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.57s 2026-03-08 00:56:26.481553 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.34s 2026-03-08 00:56:26.481557 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2026-03-08 00:56:26.481564 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.71s 2026-03-08 00:56:26.481568 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-03-08 00:56:26.481572 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.68s 2026-03-08 00:56:26.481575 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-08 00:56:26.481579 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-03-08 00:56:26.481583 | orchestrator | 2026-03-08 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:29.523589 | orchestrator | 2026-03-08 00:56:29 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:29.524006 | orchestrator | 2026-03-08 00:56:29 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:29.524065 | orchestrator | 2026-03-08 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:32.571085 | orchestrator | 2026-03-08 00:56:32 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:32.572289 | orchestrator | 2026-03-08 00:56:32 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:32.572365 | orchestrator | 2026-03-08 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:35.616286 | orchestrator | 2026-03-08 00:56:35 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:35.617343 | orchestrator | 2026-03-08 00:56:35 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:35.617380 | orchestrator | 2026-03-08 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:38.674193 | orchestrator | 2026-03-08 00:56:38 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 
00:56:38.675636 | orchestrator | 2026-03-08 00:56:38 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:38.675702 | orchestrator | 2026-03-08 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:41.727696 | orchestrator | 2026-03-08 00:56:41 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:41.728161 | orchestrator | 2026-03-08 00:56:41 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:41.728184 | orchestrator | 2026-03-08 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:44.772403 | orchestrator | 2026-03-08 00:56:44 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:44.774999 | orchestrator | 2026-03-08 00:56:44 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:44.775074 | orchestrator | 2026-03-08 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:47.821600 | orchestrator | 2026-03-08 00:56:47 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:47.823289 | orchestrator | 2026-03-08 00:56:47 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:47.823341 | orchestrator | 2026-03-08 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:50.865768 | orchestrator | 2026-03-08 00:56:50 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:50.865861 | orchestrator | 2026-03-08 00:56:50 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state STARTED 2026-03-08 00:56:50.865890 | orchestrator | 2026-03-08 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:56:53.929180 | orchestrator | 2026-03-08 00:56:53 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED 2026-03-08 00:56:53.937798 | orchestrator | 2026-03-08 00:56:53.937951 | orchestrator | 
2026-03-08 00:56:53 | INFO  | Task 65db8649-cd01-4f19-946a-a3ddfa88f72e is in state SUCCESS 2026-03-08 00:56:53.939160 | orchestrator | 2026-03-08 00:56:53.939190 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-08 00:56:53.939196 | orchestrator | 2026-03-08 00:56:53.939200 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-08 00:56:53.939204 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.097) 0:00:00.097 ********** 2026-03-08 00:56:53.939208 | orchestrator | ok: [localhost] => { 2026-03-08 00:56:53.939216 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-08 00:56:53.939223 | orchestrator | } 2026-03-08 00:56:53.939229 | orchestrator | 2026-03-08 00:56:53.939235 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-08 00:56:53.939241 | orchestrator | Sunday 08 March 2026 00:53:46 +0000 (0:00:00.050) 0:00:00.147 ********** 2026-03-08 00:56:53.939248 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-08 00:56:53.939256 | orchestrator | ...ignoring 2026-03-08 00:56:53.939264 | orchestrator | 2026-03-08 00:56:53.939271 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-08 00:56:53.939277 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:02.919) 0:00:03.067 ********** 2026-03-08 00:56:53.939283 | orchestrator | skipping: [localhost] 2026-03-08 00:56:53.939289 | orchestrator | 2026-03-08 00:56:53.939296 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-08 00:56:53.939426 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:00.062) 0:00:03.129 ********** 2026-03-08 00:56:53.939440 | orchestrator | ok: [localhost] 2026-03-08 00:56:53.939447 | orchestrator | 2026-03-08 00:56:53.939454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:56:53.939460 | orchestrator | 2026-03-08 00:56:53.939466 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:56:53.939472 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:00.190) 0:00:03.320 ********** 2026-03-08 00:56:53.939479 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:53.939485 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:53.939491 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:53.939497 | orchestrator | 2026-03-08 00:56:53.939503 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:56:53.939509 | orchestrator | Sunday 08 March 2026 00:53:49 +0000 (0:00:00.325) 0:00:03.646 ********** 2026-03-08 00:56:53.939515 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-08 00:56:53.939522 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-08 00:56:53.939528 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-08 00:56:53.939533 | orchestrator | 2026-03-08 00:56:53.939539 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-08 00:56:53.939544 | orchestrator | 2026-03-08 00:56:53.939565 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-08 00:56:53.939572 | orchestrator | Sunday 08 March 2026 00:53:50 +0000 (0:00:00.650) 0:00:04.296 ********** 2026-03-08 00:56:53.939579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-08 00:56:53.939586 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-08 00:56:53.939592 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-08 00:56:53.939598 | orchestrator | 2026-03-08 00:56:53.939604 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-08 00:56:53.939610 | orchestrator | Sunday 08 March 2026 00:53:50 +0000 (0:00:00.480) 0:00:04.777 ********** 2026-03-08 00:56:53.939636 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:53.939644 | orchestrator | 2026-03-08 00:56:53.939651 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-08 00:56:53.939657 | orchestrator | Sunday 08 March 2026 00:53:51 +0000 (0:00:00.624) 0:00:05.402 ********** 2026-03-08 00:56:53.939680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.939694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.939707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.939714 | orchestrator | 2026-03-08 00:56:53.939794 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-08 00:56:53.939803 | orchestrator | Sunday 08 March 2026 00:53:54 +0000 (0:00:03.505) 0:00:08.907 ********** 2026-03-08 00:56:53.939810 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.939817 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.939823 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.939829 | orchestrator | 2026-03-08 00:56:53.939836 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-08 00:56:53.939842 | orchestrator | Sunday 08 March 2026 00:53:55 +0000 (0:00:00.944) 0:00:09.852 ********** 2026-03-08 00:56:53.939848 | orchestrator | skipping: [testbed-node-1] 2026-03-08 
00:56:53.939854 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.939860 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.939867 | orchestrator | 2026-03-08 00:56:53.939873 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-08 00:56:53.939879 | orchestrator | Sunday 08 March 2026 00:53:57 +0000 (0:00:01.742) 0:00:11.595 ********** 2026-03-08 00:56:53.939889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.939908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.939919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 
00:56:53.939929 | orchestrator |
2026-03-08 00:56:53.939935 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-08 00:56:53.939942 | orchestrator | Sunday 08 March 2026 00:54:01 +0000 (0:00:04.308) 0:00:15.904 **********
2026-03-08 00:56:53.939947 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.939953 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.939959 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:53.939965 | orchestrator |
2026-03-08 00:56:53.939971 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-08 00:56:53.939977 | orchestrator | Sunday 08 March 2026 00:54:02 +0000 (0:00:01.064) 0:00:16.968 **********
2026-03-08 00:56:53.939982 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:53.939988 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:53.939994 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:53.939999 | orchestrator |
2026-03-08 00:56:53.940006 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-08 00:56:53.940012 | orchestrator | Sunday 08 March 2026 00:54:06 +0000 (0:00:03.810) 0:00:20.778 **********
2026-03-08 00:56:53.940019 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:56:53.940025 | orchestrator |
2026-03-08 00:56:53.940031 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-08 00:56:53.940037 | orchestrator | Sunday 08 March 2026 00:54:07 +0000 (0:00:00.538) 0:00:21.317 **********
2026-03-08 00:56:53.940049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940057 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:53.940067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940079 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.940092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940099 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.940105 | orchestrator | 2026-03-08 00:56:53.940112 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-08 00:56:53.940119 | orchestrator | Sunday 08 March 2026 00:54:10 +0000 (0:00:03.274) 0:00:24.591 ********** 2026-03-08 00:56:53.940129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940141 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:53.940152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940159 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.940169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940185 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.940192 | orchestrator | 2026-03-08 00:56:53.940198 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-08 00:56:53.940204 | orchestrator | Sunday 08 March 2026 00:54:13 +0000 (0:00:03.362) 0:00:27.954 ********** 2026-03-08 00:56:53.940211 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940218 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.940230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940242 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.940252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-08 00:56:53.940259 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:53.940265 | orchestrator | 2026-03-08 00:56:53.940272 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-08 00:56:53.940278 | orchestrator | Sunday 08 March 2026 00:54:17 +0000 
(0:00:03.298) 0:00:31.252 ********** 2026-03-08 00:56:53.940290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.940304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.940317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-08 00:56:53.940329 | orchestrator | 2026-03-08 00:56:53.940336 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-08 00:56:53.940342 | orchestrator | Sunday 08 March 2026 00:54:21 +0000 (0:00:04.397) 0:00:35.650 ********** 2026-03-08 00:56:53.940348 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.940355 | orchestrator | 
changed: [testbed-node-1]
2026-03-08 00:56:53.940361 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:53.940367 | orchestrator |
2026-03-08 00:56:53.940374 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-08 00:56:53.940380 | orchestrator | Sunday 08 March 2026 00:54:22 +0000 (0:00:00.912) 0:00:36.562 **********
2026-03-08 00:56:53.940387 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940394 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.940399 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:53.940406 | orchestrator |
2026-03-08 00:56:53.940411 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-08 00:56:53.940418 | orchestrator | Sunday 08 March 2026 00:54:23 +0000 (0:00:00.672) 0:00:37.235 **********
2026-03-08 00:56:53.940424 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940430 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.940437 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:53.940443 | orchestrator |
2026-03-08 00:56:53.940452 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-08 00:56:53.940459 | orchestrator | Sunday 08 March 2026 00:54:23 +0000 (0:00:00.349) 0:00:37.584 **********
2026-03-08 00:56:53.940465 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-08 00:56:53.940472 | orchestrator | ...ignoring
2026-03-08 00:56:53.940478 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-08 00:56:53.940485 | orchestrator | ...ignoring
2026-03-08 00:56:53.940491 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-08 00:56:53.940498 | orchestrator | ...ignoring
2026-03-08 00:56:53.940504 | orchestrator |
2026-03-08 00:56:53.940510 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-08 00:56:53.940516 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:10.905) 0:00:48.490 **********
2026-03-08 00:56:53.940522 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940528 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.940534 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:53.940540 | orchestrator |
2026-03-08 00:56:53.940547 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-08 00:56:53.940553 | orchestrator | Sunday 08 March 2026 00:54:34 +0000 (0:00:00.453) 0:00:48.943 **********
2026-03-08 00:56:53.940559 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.940565 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940571 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940577 | orchestrator |
2026-03-08 00:56:53.940584 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-08 00:56:53.940606 | orchestrator | Sunday 08 March 2026 00:54:35 +0000 (0:00:00.699) 0:00:49.643 **********
2026-03-08 00:56:53.940612 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.940617 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940623 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940629 | orchestrator |
2026-03-08 00:56:53.940636 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-08 00:56:53.940642 | orchestrator | Sunday 08 March 2026 00:54:36 +0000 (0:00:00.445) 0:00:50.088 **********
2026-03-08 00:56:53.940648 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.940655 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940661 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940667 | orchestrator |
2026-03-08 00:56:53.940673 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-08 00:56:53.940679 | orchestrator | Sunday 08 March 2026 00:54:36 +0000 (0:00:00.463) 0:00:50.552 **********
2026-03-08 00:56:53.940685 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940691 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.940697 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:53.940703 | orchestrator |
2026-03-08 00:56:53.940710 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-08 00:56:53.940716 | orchestrator | Sunday 08 March 2026 00:54:37 +0000 (0:00:00.506) 0:00:51.059 **********
2026-03-08 00:56:53.940743 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.940750 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940755 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940761 | orchestrator |
2026-03-08 00:56:53.940767 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-08 00:56:53.940772 | orchestrator | Sunday 08 March 2026 00:54:37 +0000 (0:00:00.707) 0:00:51.767 **********
2026-03-08 00:56:53.940779 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940785 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940791 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-08 00:56:53.940797 | orchestrator |
2026-03-08 00:56:53.940803 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-08 00:56:53.940809 | orchestrator | Sunday 08 March 2026 00:54:38 +0000 (0:00:00.379) 0:00:52.146 **********
2026-03-08 00:56:53.940815 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:53.940821 | orchestrator |
2026-03-08 00:56:53.940827 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-08 00:56:53.940833 | orchestrator | Sunday 08 March 2026 00:54:48 +0000 (0:00:10.234) 0:01:02.380 **********
2026-03-08 00:56:53.940839 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940844 | orchestrator |
2026-03-08 00:56:53.940850 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-08 00:56:53.940856 | orchestrator | Sunday 08 March 2026 00:54:48 +0000 (0:00:00.128) 0:01:02.509 **********
2026-03-08 00:56:53.940862 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.940867 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.940873 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.940879 | orchestrator |
2026-03-08 00:56:53.940885 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-08 00:56:53.940891 | orchestrator | Sunday 08 March 2026 00:54:49 +0000 (0:00:01.062) 0:01:03.572 **********
2026-03-08 00:56:53.940898 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:53.940904 | orchestrator |
2026-03-08 00:56:53.940910 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-08 00:56:53.940916 | orchestrator | Sunday 08 March 2026 00:54:57 +0000 (0:00:08.010) 0:01:11.582 **********
2026-03-08 00:56:53.940923 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940929 | orchestrator |
2026-03-08 00:56:53.940935 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-08 00:56:53.940941 | orchestrator | Sunday 08 March 2026 00:55:00 +0000 (0:00:02.589) 0:01:14.172 **********
2026-03-08 00:56:53.940953 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:56:53.940959 | orchestrator |
2026-03-08 00:56:53.940965 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-08 00:56:53.940971 | orchestrator | Sunday 08 March 2026 00:55:03 +0000 (0:00:03.711) 0:01:17.883 **********
2026-03-08 00:56:53.940977 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:56:53.940983 | orchestrator |
2026-03-08 00:56:53.940994 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-08 00:56:53.941000 | orchestrator | Sunday 08 March 2026 00:55:04 +0000 (0:00:00.119) 0:01:18.003 **********
2026-03-08 00:56:53.941006 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.941013 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:56:53.941019 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:56:53.941025 | orchestrator |
2026-03-08 00:56:53.941031 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-08 00:56:53.941038 | orchestrator | Sunday 08 March 2026 00:55:04 +0000 (0:00:00.312) 0:01:18.315 **********
2026-03-08 00:56:53.941044 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:56:53.941050 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:53.941057 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:53.941062 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-08 00:56:53.941068 | orchestrator |
2026-03-08 00:56:53.941074 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-08 00:56:53.941080 | orchestrator | skipping: no hosts matched
2026-03-08 00:56:53.941086 | orchestrator |
2026-03-08 00:56:53.941092 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-08 00:56:53.941098 | orchestrator |
2026-03-08 00:56:53.941104 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-08 00:56:53.941110 | orchestrator | Sunday 08 March 2026 00:55:04 +0000 (0:00:00.647) 0:01:18.963 **********
2026-03-08 00:56:53.941117 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:56:53.941123 | orchestrator |
2026-03-08 00:56:53.941129 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-08 00:56:53.941135 | orchestrator | Sunday 08 March 2026 00:55:28 +0000 (0:00:23.096) 0:01:42.059 **********
2026-03-08 00:56:53.941141 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.941147 | orchestrator |
2026-03-08 00:56:53.941154 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-08 00:56:53.941160 | orchestrator | Sunday 08 March 2026 00:55:38 +0000 (0:00:10.534) 0:01:52.594 **********
2026-03-08 00:56:53.941166 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:56:53.941172 | orchestrator |
2026-03-08 00:56:53.941178 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-08 00:56:53.941184 | orchestrator |
2026-03-08 00:56:53.941191 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-08 00:56:53.941197 | orchestrator | Sunday 08 March 2026 00:55:41 +0000 (0:00:02.692) 0:01:55.287 **********
2026-03-08 00:56:53.941203 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:56:53.941210 | orchestrator |
2026-03-08 00:56:53.941216 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-08 00:56:53.941222 | orchestrator | Sunday 08 March 2026 00:56:00 +0000 (0:00:19.144) 0:02:14.432 **********
2026-03-08 00:56:53.941229 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:56:53.941235 | orchestrator |
2026-03-08 00:56:53.941241 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-08 00:56:53.941247
| orchestrator | Sunday 08 March 2026 00:56:17 +0000 (0:00:16.585) 0:02:31.017 ********** 2026-03-08 00:56:53.941253 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:53.941259 | orchestrator | 2026-03-08 00:56:53.941265 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-08 00:56:53.941271 | orchestrator | 2026-03-08 00:56:53.941282 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-08 00:56:53.941288 | orchestrator | Sunday 08 March 2026 00:56:19 +0000 (0:00:02.611) 0:02:33.629 ********** 2026-03-08 00:56:53.941299 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.941305 | orchestrator | 2026-03-08 00:56:53.941311 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-08 00:56:53.941317 | orchestrator | Sunday 08 March 2026 00:56:31 +0000 (0:00:12.245) 0:02:45.874 ********** 2026-03-08 00:56:53.941324 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:53.941330 | orchestrator | 2026-03-08 00:56:53.941336 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-08 00:56:53.941352 | orchestrator | Sunday 08 March 2026 00:56:36 +0000 (0:00:04.597) 0:02:50.471 ********** 2026-03-08 00:56:53.941359 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:53.941364 | orchestrator | 2026-03-08 00:56:53.941375 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-08 00:56:53.941386 | orchestrator | 2026-03-08 00:56:53.941390 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-08 00:56:53.941394 | orchestrator | Sunday 08 March 2026 00:56:39 +0000 (0:00:02.796) 0:02:53.268 ********** 2026-03-08 00:56:53.941398 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:56:53.941402 | orchestrator | 
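The handlers and tasks above restart one Galera member at a time and block until the node reports a synced WSREP state before moving on. A minimal sketch of such a wait loop (hypothetical helper names, not the role's actual code), with the status lookup injected as a callable so it can be exercised without a live Galera node; in practice the callable would run `SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'` against the node:

```python
import time


def wait_for_wsrep_sync(get_state, timeout=60.0, interval=1.0):
    """Poll a Galera node until wsrep_local_state_comment reports 'Synced'.

    get_state: callable returning the node's current
    wsrep_local_state_comment value (e.g. 'Joined', 'Donor/Desynced',
    'Synced'). Returns True once synced, False if the timeout elapses.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Synced":
            return True
        time.sleep(interval)
    return False


# Fake node that reaches the synced state after two polls:
states = iter(["Donor/Desynced", "Joined", "Synced"])
print(wait_for_wsrep_sync(lambda: next(states), timeout=5.0, interval=0.01))
```

The same poll-until-condition shape underlies all of the "Wait for … port liveness" and "Wait for … to sync WSREP" tasks in this play; only the predicate differs.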
2026-03-08 00:56:53.941406 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-08 00:56:53.941412 | orchestrator | Sunday 08 March 2026 00:56:39 +0000 (0:00:00.511) 0:02:53.780 ********** 2026-03-08 00:56:53.941418 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.941425 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.941431 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.941437 | orchestrator | 2026-03-08 00:56:53.941444 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-08 00:56:53.941451 | orchestrator | Sunday 08 March 2026 00:56:42 +0000 (0:00:02.696) 0:02:56.476 ********** 2026-03-08 00:56:53.941455 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.941459 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.941462 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.941466 | orchestrator | 2026-03-08 00:56:53.941470 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-08 00:56:53.941474 | orchestrator | Sunday 08 March 2026 00:56:45 +0000 (0:00:02.534) 0:02:59.011 ********** 2026-03-08 00:56:53.941477 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.941482 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.941488 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.941495 | orchestrator | 2026-03-08 00:56:53.941501 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-08 00:56:53.941507 | orchestrator | Sunday 08 March 2026 00:56:47 +0000 (0:00:02.435) 0:03:01.446 ********** 2026-03-08 00:56:53.941517 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.941524 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.941529 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:56:53.941536 | orchestrator | 
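Earlier in this play, the "Divide hosts by their MariaDB service WSREP sync status" task splits cluster members into groups so that only out-of-sync nodes get restarted (the `mariadb_restart` host pattern seen in the warning above is one such group). A small sketch of that grouping logic; the `mariadb_sync` group name and dict shapes are assumptions for illustration, as the real role builds inventory groups from a registered status variable:

```python
def divide_hosts_by_wsrep_status(host_states):
    """Split hosts into synced and unsynced groups by their WSREP state.

    host_states: mapping of inventory hostname to its
    wsrep_local_state_comment value. Hosts reporting 'Synced' need no
    restart; all others land in the hypothetical 'mariadb_restart' group.
    """
    groups = {"mariadb_sync": [], "mariadb_restart": []}
    for host, state in host_states.items():
        key = "mariadb_sync" if state == "Synced" else "mariadb_restart"
        groups[key].append(host)
    return groups


print(divide_hosts_by_wsrep_status({
    "testbed-node-0": "Synced",
    "testbed-node-1": "Joined",
    "testbed-node-2": "Synced",
}))
```

In this run every member was already synced, which is why the `mariadb_restart` pattern matched no hosts and the "Restart mariadb services" play was skipped.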
2026-03-08 00:56:53.941542 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-08 00:56:53.941547 | orchestrator | Sunday 08 March 2026 00:56:49 +0000 (0:00:02.431) 0:03:03.877 ********** 2026-03-08 00:56:53.941553 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:56:53.941559 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:56:53.941564 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:56:53.941570 | orchestrator | 2026-03-08 00:56:53.941576 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-08 00:56:53.941582 | orchestrator | Sunday 08 March 2026 00:56:53 +0000 (0:00:03.245) 0:03:07.122 ********** 2026-03-08 00:56:53.941588 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:56:53.941595 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:56:53.941601 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:56:53.941607 | orchestrator | 2026-03-08 00:56:53.941613 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:56:53.941621 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-08 00:56:53.941626 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-08 00:56:53.941633 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-08 00:56:53.941639 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-08 00:56:53.941645 | orchestrator | 2026-03-08 00:56:53.941651 | orchestrator | 2026-03-08 00:56:53.941657 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:56:53.941664 | orchestrator | Sunday 08 March 2026 00:56:53 +0000 (0:00:00.242) 0:03:07.365 ********** 2026-03-08 00:56:53.941670 | 
orchestrator | ===============================================================================
2026-03-08 00:56:53.941675 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.24s
2026-03-08 00:56:53.941681 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.12s
2026-03-08 00:56:53.941688 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.25s
2026-03-08 00:56:53.941693 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s
2026-03-08 00:56:53.941700 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.23s
2026-03-08 00:56:53.941706 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.01s
2026-03-08 00:56:53.941716 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.30s
2026-03-08 00:56:53.941738 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.60s
2026-03-08 00:56:53.941743 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.40s
2026-03-08 00:56:53.941747 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.31s
2026-03-08 00:56:53.941750 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.81s
2026-03-08 00:56:53.941754 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.71s
2026-03-08 00:56:53.941758 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.51s
2026-03-08 00:56:53.941762 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.36s
2026-03-08 00:56:53.941765 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.30s
2026-03-08 00:56:53.941769 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.27s
2026-03-08 00:56:53.941773 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.25s
2026-03-08 00:56:53.941777 | orchestrator | Check MariaDB service --------------------------------------------------- 2.92s
2026-03-08 00:56:53.941780 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s
2026-03-08 00:56:53.941784 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.70s
2026-03-08 00:56:53.941788 | orchestrator | 2026-03-08 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:56:56.987079 | orchestrator | 2026-03-08 00:56:56 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:56:56.990615 | orchestrator | 2026-03-08 00:56:56 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state STARTED
2026-03-08 00:56:56.992939 | orchestrator | 2026-03-08 00:56:56 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:56:56.993220 | orchestrator | 2026-03-08 00:56:56 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:10.145429 | orchestrator | 2026-03-08 00:58:10 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:10.149298 | orchestrator | 2026-03-08 00:58:10 | INFO  | Task 8de0a801-165e-4157-b414-7d40e90108d3 is in state SUCCESS
2026-03-08 00:58:10.151387 | orchestrator |
2026-03-08 00:58:10.151431 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 00:58:10.151438 | orchestrator | 2.16.14
2026-03-08 00:58:10.151444 | orchestrator |
2026-03-08 00:58:10.151449 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-08 00:58:10.151455 | orchestrator |
2026-03-08 00:58:10.151460 | orchestrator | TASK [ceph-facts : Include
facts.yml] ******************************************
2026-03-08 00:58:10.151465 | orchestrator | Sunday 08 March 2026 00:55:56 +0000 (0:00:00.580) 0:00:00.580 **********
2026-03-08 00:58:10.151471 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 00:58:10.151476 | orchestrator |
2026-03-08 00:58:10.151481 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-08 00:58:10.151486 | orchestrator | Sunday 08 March 2026 00:55:57 +0000 (0:00:00.628) 0:00:01.209 **********
2026-03-08 00:58:10.151492 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151497 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151502 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151507 | orchestrator |
2026-03-08 00:58:10.151512 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-08 00:58:10.151517 | orchestrator | Sunday 08 March 2026 00:55:58 +0000 (0:00:00.645) 0:00:01.854 **********
2026-03-08 00:58:10.151522 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151527 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151532 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151537 | orchestrator |
2026-03-08 00:58:10.151542 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-08 00:58:10.151547 | orchestrator | Sunday 08 March 2026 00:55:58 +0000 (0:00:00.285) 0:00:02.139 **********
2026-03-08 00:58:10.151552 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151557 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151562 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151648 | orchestrator |
2026-03-08 00:58:10.151655 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-08 00:58:10.151660 | orchestrator | Sunday 08 March 2026 00:55:59 +0000 (0:00:00.766) 0:00:02.906 **********
2026-03-08 00:58:10.151665 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151670 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151676 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151681 | orchestrator |
2026-03-08 00:58:10.151686 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-08 00:58:10.151691 | orchestrator | Sunday 08 March 2026 00:55:59 +0000 (0:00:00.273) 0:00:03.180 **********
2026-03-08 00:58:10.151697 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151702 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151707 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151713 | orchestrator |
2026-03-08 00:58:10.151718 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-08 00:58:10.151814 | orchestrator | Sunday 08 March 2026 00:55:59 +0000 (0:00:00.286) 0:00:03.467 **********
2026-03-08 00:58:10.151976 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.151984 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.151989 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.151995 | orchestrator |
2026-03-08 00:58:10.152000 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-08 00:58:10.152028 | orchestrator | Sunday 08 March 2026 00:56:00 +0000 (0:00:00.365) 0:00:03.832 **********
2026-03-08 00:58:10.152034 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:10.152067 | orchestrator | skipping: [testbed-node-4]
2026-03-08 00:58:10.152072 | orchestrator | skipping: [testbed-node-5]
2026-03-08 00:58:10.152078 | orchestrator |
2026-03-08 00:58:10.152083 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-08 00:58:10.152088 | orchestrator | Sunday 08 March 2026 00:56:00 +0000 (0:00:00.448) 0:00:04.280 **********
2026-03-08 00:58:10.152093 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.152098 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.152103 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.152108 | orchestrator |
2026-03-08 00:58:10.152113 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-08 00:58:10.152119 | orchestrator | Sunday 08 March 2026 00:56:00 +0000 (0:00:00.290) 0:00:04.571 **********
2026-03-08 00:58:10.152124 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-08 00:58:10.152129 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:58:10.152134 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:58:10.152139 | orchestrator |
2026-03-08 00:58:10.152144 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-08 00:58:10.152149 | orchestrator | Sunday 08 March 2026 00:56:01 +0000 (0:00:00.528) 0:00:05.272 **********
2026-03-08 00:58:10.152154 | orchestrator | ok: [testbed-node-3]
2026-03-08 00:58:10.152159 | orchestrator | ok: [testbed-node-4]
2026-03-08 00:58:10.152165 | orchestrator | ok: [testbed-node-5]
2026-03-08 00:58:10.152170 | orchestrator |
2026-03-08 00:58:10.152175 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-08 00:58:10.152181 | orchestrator | Sunday 08 March 2026 00:56:01 +0000 (0:00:00.528) 0:00:05.800 **********
2026-03-08 00:58:10.152190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-08 00:58:10.152199 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-08 00:58:10.152212 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-08 00:58:10.152436 | orchestrator |
2026-03-08 00:58:10.152443 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-08 00:58:10.152448 | orchestrator | Sunday 08 March 2026 00:56:04 +0000 (0:00:02.274) 0:00:08.075 **********
2026-03-08 00:58:10.152453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-08 00:58:10.152459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-08 00:58:10.152464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-08 00:58:10.152469 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:10.152474 | orchestrator |
2026-03-08 00:58:10.152501 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-08 00:58:10.152507 | orchestrator | Sunday 08 March 2026 00:56:04 +0000 (0:00:00.705) 0:00:08.780 **********
2026-03-08 00:58:10.152514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152538 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:10.152543 | orchestrator |
2026-03-08 00:58:10.152548 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-08 00:58:10.152553 | orchestrator | Sunday 08 March 2026 00:56:05 +0000 (0:00:00.890) 0:00:09.671 **********
2026-03-08 00:58:10.152560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-08 00:58:10.152621 | orchestrator | skipping: [testbed-node-3]
2026-03-08 00:58:10.152626 | orchestrator |
2026-03-08 00:58:10.152632 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-08 00:58:10.152637 | orchestrator | Sunday 08 March 2026 00:56:06 +0000 (0:00:00.410) 0:00:10.082 **********
2026-03-08 00:58:10.152643 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4205b976b985', 'stderr': '', 'rc': 0, 'cmd': ['docker', 
'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-08 00:56:02.682610', 'end': '2026-03-08 00:56:02.732424', 'delta': '0:00:00.049814', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4205b976b985'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:10.152650 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd21404f8fa64', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-08 00:56:03.511703', 'end': '2026-03-08 00:56:03.554733', 'delta': '0:00:00.043030', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d21404f8fa64'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:10.152684 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3381a5f0e5b8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-08 00:56:04.062313', 'end': '2026-03-08 00:56:04.096417', 'delta': '0:00:00.034104', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 
'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3381a5f0e5b8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-08 00:58:10.152696 | orchestrator | 2026-03-08 00:58:10.152701 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-08 00:58:10.152706 | orchestrator | Sunday 08 March 2026 00:56:06 +0000 (0:00:00.210) 0:00:10.293 ********** 2026-03-08 00:58:10.152711 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.152716 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.152721 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.152726 | orchestrator | 2026-03-08 00:58:10.152732 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-08 00:58:10.152737 | orchestrator | Sunday 08 March 2026 00:56:06 +0000 (0:00:00.442) 0:00:10.735 ********** 2026-03-08 00:58:10.152742 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-08 00:58:10.152747 | orchestrator | 2026-03-08 00:58:10.152752 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-08 00:58:10.152760 | orchestrator | Sunday 08 March 2026 00:56:08 +0000 (0:00:01.879) 0:00:12.614 ********** 2026-03-08 00:58:10.152771 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.152783 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.152792 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.152800 | orchestrator | 2026-03-08 00:58:10.152809 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-08 00:58:10.152818 | orchestrator | Sunday 08 March 2026 00:56:09 +0000 (0:00:00.365) 0:00:12.980 ********** 2026-03-08 00:58:10.152826 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.152834 | orchestrator | 
skipping: [testbed-node-4] 2026-03-08 00:58:10.152842 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.152849 | orchestrator | 2026-03-08 00:58:10.152857 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:58:10.152870 | orchestrator | Sunday 08 March 2026 00:56:09 +0000 (0:00:00.437) 0:00:13.417 ********** 2026-03-08 00:58:10.152879 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.152888 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.152897 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.152906 | orchestrator | 2026-03-08 00:58:10.152915 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-08 00:58:10.152924 | orchestrator | Sunday 08 March 2026 00:56:10 +0000 (0:00:00.488) 0:00:13.906 ********** 2026-03-08 00:58:10.152933 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.152942 | orchestrator | 2026-03-08 00:58:10.152951 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-08 00:58:10.152960 | orchestrator | Sunday 08 March 2026 00:56:10 +0000 (0:00:00.147) 0:00:14.054 ********** 2026-03-08 00:58:10.152970 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.152976 | orchestrator | 2026-03-08 00:58:10.152982 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-08 00:58:10.152987 | orchestrator | Sunday 08 March 2026 00:56:10 +0000 (0:00:00.246) 0:00:14.300 ********** 2026-03-08 00:58:10.152992 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.152997 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153002 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153007 | orchestrator | 2026-03-08 00:58:10.153013 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-08 00:58:10.153018 
| orchestrator | Sunday 08 March 2026 00:56:10 +0000 (0:00:00.273) 0:00:14.574 ********** 2026-03-08 00:58:10.153028 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153033 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153038 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153044 | orchestrator | 2026-03-08 00:58:10.153049 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-08 00:58:10.153054 | orchestrator | Sunday 08 March 2026 00:56:11 +0000 (0:00:00.328) 0:00:14.902 ********** 2026-03-08 00:58:10.153059 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153064 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153069 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153074 | orchestrator | 2026-03-08 00:58:10.153079 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-08 00:58:10.153085 | orchestrator | Sunday 08 March 2026 00:56:11 +0000 (0:00:00.536) 0:00:15.439 ********** 2026-03-08 00:58:10.153090 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153095 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153100 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153105 | orchestrator | 2026-03-08 00:58:10.153110 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-08 00:58:10.153115 | orchestrator | Sunday 08 March 2026 00:56:12 +0000 (0:00:00.390) 0:00:15.829 ********** 2026-03-08 00:58:10.153120 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153125 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153130 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153136 | orchestrator | 2026-03-08 00:58:10.153141 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-08 00:58:10.153146 | 
orchestrator | Sunday 08 March 2026 00:56:12 +0000 (0:00:00.345) 0:00:16.174 ********** 2026-03-08 00:58:10.153151 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153156 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153161 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153190 | orchestrator | 2026-03-08 00:58:10.153196 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-08 00:58:10.153201 | orchestrator | Sunday 08 March 2026 00:56:12 +0000 (0:00:00.406) 0:00:16.580 ********** 2026-03-08 00:58:10.153206 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153212 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153217 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153222 | orchestrator | 2026-03-08 00:58:10.153227 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-08 00:58:10.153232 | orchestrator | Sunday 08 March 2026 00:56:13 +0000 (0:00:00.501) 0:00:17.082 ********** 2026-03-08 00:58:10.153238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1', 'dm-uuid-LVM-i9Xp5FUImtPtfN54C9ErRcykIZxaciZ8LXUwAGaSEtefK9rOU9kaKk7rZR7ptQZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb', 
'dm-uuid-LVM-lHTKlioALzvrdCWxIUOY32laezYa9plhCTJmFyMIYqzt4GULUEK4IqtgTGpoAbH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3', 'dm-uuid-LVM-A6sX8tBZd3f7ouAe7LbLRKt8yUuKL0IDAxAZcluQUQudt0215DlOFuVxUcuxbYVY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CgeyGY-4o5N-jaLE-Ybsd-Xi8d-yVB4-37QTGL', 'scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea', 'scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137', 'dm-uuid-LVM-sKJiMMw0cExulSsyIHg8glBLvDfU3ZtqvP3kpDXrQBSsu6FbQiwuhHaTocE12knM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PF0bub-Ex82-boiQ-txFA-GEv1-V0IY-tU6VIs', 'scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f', 'scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d', 'scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-08 00:58:10.153420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153430 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16', 
'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HrRq55-4gpm-0vnp-o3sj-TvyH-5XAh-qNEgG1', 'scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c', 'scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUddYJ-aAA1-Mgqt-B2eI-RKBS-JglY-blMXBN', 'scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5', 'scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead', 'dm-uuid-LVM-nHFytykV0Xq8u8fjA5hGQa4Cn7XhkTNmvkeLvLgPXHeoyLboVG1ltbWGS54dxNZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9', 'scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717', 'dm-uuid-LVM-52Zq5ucCtcvbGnpmAUTA1jJlUb8YWeRpVDHZgB300qIhha9jZhACuwUx3qWK1rRI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153517 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.153523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-08 00:58:10.153600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153618 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M4Otux-6dey-Ma9v-e8Ja-5EGJ-G046-HaA2BM', 'scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49', 'scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7Xg0Dw-SHI6-t4Km-ifTF-zLGd-Zegk-01LUBG', 'scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea', 'scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464', 'scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-08 00:58:10.153658 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.153666 | orchestrator | 2026-03-08 00:58:10.153674 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-08 00:58:10.153687 | orchestrator | Sunday 08 March 2026 00:56:13 +0000 (0:00:00.541) 0:00:17.623 ********** 2026-03-08 00:58:10.153697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1', 'dm-uuid-LVM-i9Xp5FUImtPtfN54C9ErRcykIZxaciZ8LXUwAGaSEtefK9rOU9kaKk7rZR7ptQZ6'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb', 'dm-uuid-LVM-lHTKlioALzvrdCWxIUOY32laezYa9plhCTJmFyMIYqzt4GULUEK4IqtgTGpoAbH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3', 'dm-uuid-LVM-A6sX8tBZd3f7ouAe7LbLRKt8yUuKL0IDAxAZcluQUQudt0215DlOFuVxUcuxbYVY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137', 'dm-uuid-LVM-sKJiMMw0cExulSsyIHg8glBLvDfU3ZtqvP3kpDXrQBSsu6FbQiwuhHaTocE12knM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-08 00:58:10.153833 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16', 'scsi-SQEMU_QEMU_HARDDISK_c560df89-ac9f-43eb-b629-a1334440ff2f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fb6eff58--5334--5828--9091--c0c39e64aeb1-osd--block--fb6eff58--5334--5828--9091--c0c39e64aeb1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CgeyGY-4o5N-jaLE-Ybsd-Xi8d-yVB4-37QTGL', 'scsi-0QEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea', 'scsi-SQEMU_QEMU_HARDDISK_d9cf7a23-7f28-4003-9453-869e07fd4fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153872 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e3bef375--74a7--543b--9618--1787c99aecbb-osd--block--e3bef375--74a7--543b--9618--1787c99aecbb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PF0bub-Ex82-boiQ-txFA-GEv1-V0IY-tU6VIs', 'scsi-0QEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f', 'scsi-SQEMU_QEMU_HARDDISK_26ccb454-a8ab-488a-9282-a29bd19f440f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153897 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d', 'scsi-SQEMU_QEMU_HARDDISK_f69177ca-c9b7-4ecf-919e-98158e504d7d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153961 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153972 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.153980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153985 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.153995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16', 'scsi-SQEMU_QEMU_HARDDISK_544edfd2-ddc4-4596-85df-1c9b9e7c3b59-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9614fc2--8329--596c--937c--60ceb39d5fd3-osd--block--e9614fc2--8329--596c--937c--60ceb39d5fd3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HrRq55-4gpm-0vnp-o3sj-TvyH-5XAh-qNEgG1', 'scsi-0QEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c', 'scsi-SQEMU_QEMU_HARDDISK_581ffd65-22a4-4ef2-934b-fe47abf1be5c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eb569be8--41bf--5aa1--acb9--f145abad3137-osd--block--eb569be8--41bf--5aa1--acb9--f145abad3137'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AUddYJ-aAA1-Mgqt-B2eI-RKBS-JglY-blMXBN', 'scsi-0QEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5', 'scsi-SQEMU_QEMU_HARDDISK_2f73f377-a3b9-4553-a6d0-e21973e3a5e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9', 'scsi-SQEMU_QEMU_HARDDISK_1d4cf331-77e8-4e4e-b490-10f0636e01e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154062 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154068 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead', 'dm-uuid-LVM-nHFytykV0Xq8u8fjA5hGQa4Cn7XhkTNmvkeLvLgPXHeoyLboVG1ltbWGS54dxNZ6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717', 'dm-uuid-LVM-52Zq5ucCtcvbGnpmAUTA1jJlUb8YWeRpVDHZgB300qIhha9jZhACuwUx3qWK1rRI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154144 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1404ed60-298a-412c-bd4f-1e90f35345d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5bde4b8d--c924--5d1f--8c9a--71f523250ead-osd--block--5bde4b8d--c924--5d1f--8c9a--71f523250ead'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M4Otux-6dey-Ma9v-e8Ja-5EGJ-G046-HaA2BM', 'scsi-0QEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49', 'scsi-SQEMU_QEMU_HARDDISK_a9abd44a-efa3-4fc9-810c-e4cec7375a49'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ad275011--1eda--59d8--b818--a96e3c140717-osd--block--ad275011--1eda--59d8--b818--a96e3c140717'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7Xg0Dw-SHI6-t4Km-ifTF-zLGd-Zegk-01LUBG', 'scsi-0QEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea', 'scsi-SQEMU_QEMU_HARDDISK_70953687-69fa-4056-8e35-7089ee1c64ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464', 'scsi-SQEMU_QEMU_HARDDISK_7bc88367-6aaf-4ded-8fa4-f9240096c464'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-08-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-08 00:58:10.154185 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154190 | orchestrator | 2026-03-08 00:58:10.154196 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-08 00:58:10.154201 | orchestrator | Sunday 08 March 2026 00:56:14 +0000 (0:00:00.723) 0:00:18.347 ********** 2026-03-08 00:58:10.154207 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.154212 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.154217 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.154222 | orchestrator | 2026-03-08 00:58:10.154228 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-08 00:58:10.154233 | orchestrator | Sunday 08 March 2026 00:56:15 +0000 (0:00:00.803) 0:00:19.150 ********** 2026-03-08 00:58:10.154247 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.154259 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.154264 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.154270 | orchestrator | 2026-03-08 00:58:10.154275 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:58:10.154280 | orchestrator | Sunday 08 March 2026 00:56:15 +0000 (0:00:00.523) 0:00:19.673 ********** 2026-03-08 00:58:10.154285 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.154290 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.154295 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.154300 | orchestrator | 2026-03-08 00:58:10.154305 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:58:10.154310 | orchestrator | Sunday 08 March 2026 00:56:16 +0000 (0:00:00.652) 0:00:20.326 
********** 2026-03-08 00:58:10.154315 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154321 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154326 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154331 | orchestrator | 2026-03-08 00:58:10.154336 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-08 00:58:10.154341 | orchestrator | Sunday 08 March 2026 00:56:16 +0000 (0:00:00.315) 0:00:20.642 ********** 2026-03-08 00:58:10.154346 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154351 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154357 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154362 | orchestrator | 2026-03-08 00:58:10.154367 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-08 00:58:10.154374 | orchestrator | Sunday 08 March 2026 00:56:17 +0000 (0:00:00.395) 0:00:21.037 ********** 2026-03-08 00:58:10.154379 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154385 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154393 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154398 | orchestrator | 2026-03-08 00:58:10.154403 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-08 00:58:10.154409 | orchestrator | Sunday 08 March 2026 00:56:17 +0000 (0:00:00.525) 0:00:21.563 ********** 2026-03-08 00:58:10.154420 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-08 00:58:10.154425 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-08 00:58:10.154436 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-08 00:58:10.154441 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-08 00:58:10.154447 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-08 00:58:10.154452 | orchestrator 
| ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-08 00:58:10.154457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-08 00:58:10.154462 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-08 00:58:10.154467 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-08 00:58:10.154472 | orchestrator | 2026-03-08 00:58:10.154477 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-08 00:58:10.154483 | orchestrator | Sunday 08 March 2026 00:56:18 +0000 (0:00:00.854) 0:00:22.418 ********** 2026-03-08 00:58:10.154488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-08 00:58:10.154493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-08 00:58:10.154498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-08 00:58:10.154506 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-08 00:58:10.154528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-08 00:58:10.154539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-08 00:58:10.154548 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-08 00:58:10.154674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-08 00:58:10.154695 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-08 00:58:10.154701 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154706 | orchestrator | 2026-03-08 00:58:10.154711 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-08 00:58:10.154716 | orchestrator | Sunday 08 March 2026 00:56:18 +0000 (0:00:00.366) 0:00:22.785 ********** 2026-03-08 
00:58:10.154722 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 00:58:10.154727 | orchestrator | 2026-03-08 00:58:10.154733 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-08 00:58:10.154738 | orchestrator | Sunday 08 March 2026 00:56:19 +0000 (0:00:00.706) 0:00:23.492 ********** 2026-03-08 00:58:10.154764 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154770 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154775 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154781 | orchestrator | 2026-03-08 00:58:10.154786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-08 00:58:10.154791 | orchestrator | Sunday 08 March 2026 00:56:19 +0000 (0:00:00.320) 0:00:23.812 ********** 2026-03-08 00:58:10.154796 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154801 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154807 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154812 | orchestrator | 2026-03-08 00:58:10.154817 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-08 00:58:10.154822 | orchestrator | Sunday 08 March 2026 00:56:20 +0000 (0:00:00.350) 0:00:24.163 ********** 2026-03-08 00:58:10.154827 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154839 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.154844 | orchestrator | skipping: [testbed-node-5] 2026-03-08 00:58:10.154849 | orchestrator | 2026-03-08 00:58:10.154854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-08 00:58:10.154859 | orchestrator | Sunday 08 March 2026 00:56:20 +0000 (0:00:00.338) 0:00:24.501 ********** 2026-03-08 
00:58:10.154865 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.154870 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.154875 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.154881 | orchestrator | 2026-03-08 00:58:10.154886 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-08 00:58:10.154891 | orchestrator | Sunday 08 March 2026 00:56:21 +0000 (0:00:00.632) 0:00:25.134 ********** 2026-03-08 00:58:10.154896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:58:10.154901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:58:10.154906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:58:10.154911 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154917 | orchestrator | 2026-03-08 00:58:10.154922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-08 00:58:10.154927 | orchestrator | Sunday 08 March 2026 00:56:21 +0000 (0:00:00.408) 0:00:25.543 ********** 2026-03-08 00:58:10.154932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:58:10.154937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:58:10.154942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:58:10.154948 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154953 | orchestrator | 2026-03-08 00:58:10.154958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-08 00:58:10.154963 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.372) 0:00:25.915 ********** 2026-03-08 00:58:10.154974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-08 00:58:10.154979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-08 00:58:10.154984 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-08 00:58:10.154990 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.154995 | orchestrator | 2026-03-08 00:58:10.155001 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-08 00:58:10.155010 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.380) 0:00:26.296 ********** 2026-03-08 00:58:10.155023 | orchestrator | ok: [testbed-node-3] 2026-03-08 00:58:10.155033 | orchestrator | ok: [testbed-node-4] 2026-03-08 00:58:10.155041 | orchestrator | ok: [testbed-node-5] 2026-03-08 00:58:10.155050 | orchestrator | 2026-03-08 00:58:10.155059 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-08 00:58:10.155067 | orchestrator | Sunday 08 March 2026 00:56:22 +0000 (0:00:00.312) 0:00:26.609 ********** 2026-03-08 00:58:10.155076 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-08 00:58:10.155085 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-08 00:58:10.155095 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-08 00:58:10.155104 | orchestrator | 2026-03-08 00:58:10.155113 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-08 00:58:10.155123 | orchestrator | Sunday 08 March 2026 00:56:23 +0000 (0:00:00.499) 0:00:27.108 ********** 2026-03-08 00:58:10.155130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:10.155137 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:10.155146 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:10.155154 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:58:10.155161 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-08 00:58:10.155169 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:58:10.155183 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:58:10.155191 | orchestrator | 2026-03-08 00:58:10.155199 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-08 00:58:10.155207 | orchestrator | Sunday 08 March 2026 00:56:24 +0000 (0:00:01.028) 0:00:28.137 ********** 2026-03-08 00:58:10.155215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-08 00:58:10.155223 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-08 00:58:10.155232 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-08 00:58:10.155241 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-08 00:58:10.155249 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-08 00:58:10.155259 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-08 00:58:10.155274 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-08 00:58:10.155284 | orchestrator | 2026-03-08 00:58:10.155293 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-08 00:58:10.155302 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:01.979) 0:00:30.117 ********** 2026-03-08 00:58:10.155311 | orchestrator | skipping: [testbed-node-3] 2026-03-08 00:58:10.155320 | orchestrator | skipping: [testbed-node-4] 2026-03-08 00:58:10.155330 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-08 00:58:10.155339 | orchestrator | 2026-03-08 00:58:10.155348 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-08 00:58:10.155357 | orchestrator | Sunday 08 March 2026 00:56:26 +0000 (0:00:00.371) 0:00:30.489 ********** 2026-03-08 00:58:10.155366 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:10.155377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:10.155386 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:10.155397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:10.155407 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-08 00:58:10.155413 | orchestrator | 2026-03-08 00:58:10.155418 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-08 00:58:10.155423 | orchestrator | Sunday 08 March 2026 00:57:11 +0000 (0:00:44.913) 0:01:15.402 ********** 2026-03-08 00:58:10.155428 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155434 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155444 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155450 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155455 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155465 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-08 00:58:10.155470 | orchestrator | 2026-03-08 00:58:10.155475 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-08 00:58:10.155480 | orchestrator | Sunday 08 March 2026 00:57:36 +0000 (0:00:24.803) 0:01:40.206 ********** 2026-03-08 00:58:10.155485 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155490 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155510 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155515 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-08 00:58:10.155520 | orchestrator | 2026-03-08 00:58:10.155525 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-08 00:58:10.155530 | orchestrator | Sunday 08 March 2026 00:57:48 +0000 (0:00:12.411) 0:01:52.617 ********** 2026-03-08 00:58:10.155535 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155540 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:10.155545 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:10.155550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155555 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:10.155621 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:10.155630 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155635 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:10.155640 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:10.155645 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155650 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-08 00:58:10.155655 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-08 00:58:10.155660 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-08 00:58:10.155665 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-08 00:58:10.155670 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-08 00:58:10.155675 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-08 00:58:10.155681 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-08 00:58:10.155686 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-08 00:58:10.155691 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-08 00:58:10.155696 | orchestrator |
2026-03-08 00:58:10.155701 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 00:58:10.155710 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-08 00:58:10.155716 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-08 00:58:10.155721 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-08 00:58:10.155726 | orchestrator |
2026-03-08 00:58:10.155732 | orchestrator |
2026-03-08 00:58:10.155737 | orchestrator |
2026-03-08 00:58:10.155742 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 00:58:10.155747 | orchestrator | Sunday 08 March 2026 00:58:07 +0000 (0:00:19.174) 0:02:11.792 **********
2026-03-08 00:58:10.155761 | orchestrator | ===============================================================================
2026-03-08 00:58:10.155766 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.91s
2026-03-08 00:58:10.155771 | orchestrator | generate keys ---------------------------------------------------------- 24.80s
2026-03-08 00:58:10.155776 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.17s
2026-03-08 00:58:10.155781 | orchestrator | get keys from monitors ------------------------------------------------- 12.41s
2026-03-08 00:58:10.155786 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s
2026-03-08 00:58:10.155791 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.98s
2026-03-08 00:58:10.155796 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.88s
2026-03-08 00:58:10.155801 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s
2026-03-08 00:58:10.155806 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s
2026-03-08 00:58:10.155811 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s
2026-03-08 00:58:10.155816 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.80s
2026-03-08 00:58:10.155821 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s
2026-03-08 00:58:10.155826 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.72s
2026-03-08 00:58:10.155831 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s
2026-03-08 00:58:10.155836 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s
2026-03-08 00:58:10.155841 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s
2026-03-08 00:58:10.155846 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2026-03-08 00:58:10.155851 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2026-03-08 00:58:10.155856 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s
2026-03-08 00:58:10.155861 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s
2026-03-08 00:58:10.155866 | orchestrator | 2026-03-08 00:58:10 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:10.155871 | orchestrator | 2026-03-08 00:58:10 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:10.155876 | orchestrator | 2026-03-08 00:58:10 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:13.203382 | orchestrator | 2026-03-08 00:58:13 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:13.205200 | orchestrator | 2026-03-08 00:58:13 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:13.207215 | orchestrator | 2026-03-08 00:58:13 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:13.207276 | orchestrator | 2026-03-08 00:58:13 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:16.254174 | orchestrator | 2026-03-08 00:58:16 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:16.255841 | orchestrator | 2026-03-08 00:58:16 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:16.257281 | orchestrator | 2026-03-08 00:58:16 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:16.257338 | orchestrator | 2026-03-08 00:58:16 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:19.309871 | orchestrator | 2026-03-08 00:58:19 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:19.311333 | orchestrator | 2026-03-08 00:58:19 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:19.313366 | orchestrator | 2026-03-08 00:58:19 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:19.313673 | orchestrator | 2026-03-08 00:58:19 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:22.350104 | orchestrator | 2026-03-08 00:58:22 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:22.352172 | orchestrator | 2026-03-08 00:58:22 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:22.354913 | orchestrator | 2026-03-08 00:58:22 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:22.355104 | orchestrator | 2026-03-08 00:58:22 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:25.405906 | orchestrator | 2026-03-08 00:58:25 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:25.408936 | orchestrator | 2026-03-08 00:58:25 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:25.410104 | orchestrator | 2026-03-08 00:58:25 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:25.410641 | orchestrator | 2026-03-08 00:58:25 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:28.453786 | orchestrator | 2026-03-08 00:58:28 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:28.455790 | orchestrator | 2026-03-08 00:58:28 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:28.455847 | orchestrator | 2026-03-08 00:58:28 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:28.455856 | orchestrator | 2026-03-08 00:58:28 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:31.506951 | orchestrator | 2026-03-08 00:58:31 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state STARTED
2026-03-08 00:58:31.507014 | orchestrator | 2026-03-08 00:58:31 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED
2026-03-08 00:58:31.507727 | orchestrator | 2026-03-08 00:58:31 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED
2026-03-08 00:58:31.507780 | orchestrator | 2026-03-08 00:58:31 | INFO  | Wait 1 second(s) until the next check
2026-03-08 00:58:34.557195 | orchestrator |
2026-03-08 00:58:34.557269 | orchestrator |
2026-03-08 00:58:34.557275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 00:58:34.557281 | orchestrator |
2026-03-08 00:58:34.557285 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 00:58:34.557290 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.272) 0:00:00.272 **********
2026-03-08 00:58:34.557294 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.557299 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.557303 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.557325 | orchestrator |
2026-03-08 00:58:34.557329 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 00:58:34.557333 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.320) 0:00:00.592 **********
2026-03-08 00:58:34.557337 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-08 00:58:34.557350 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-08 00:58:34.557355 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-08 00:58:34.557359 | orchestrator |
2026-03-08 00:58:34.557363 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-08 00:58:34.557367 | orchestrator |
2026-03-08 00:58:34.557371 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-08 00:58:34.557374 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.442) 0:00:01.035 **********
2026-03-08 00:58:34.557379 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 00:58:34.557384 | orchestrator |
2026-03-08 00:58:34.557388 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-08 00:58:34.557391 | orchestrator | Sunday 08 March 2026 00:56:59 +0000 (0:00:00.519) 0:00:01.555 **********
2026-03-08 00:58:34.557409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-08 00:58:34.557433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-08 00:58:34.557446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-08 00:58:34.557450 | orchestrator |
2026-03-08 00:58:34.557454 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-08 00:58:34.557458 | orchestrator | Sunday 08 March 2026 00:57:00 +0000 (0:00:01.192) 0:00:02.748 **********
2026-03-08 00:58:34.557462 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.557469 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.557473 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.557476 | orchestrator |
2026-03-08 00:58:34.557480 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-08 00:58:34.557484 | orchestrator | Sunday 08 March 2026 00:57:01 +0000 (0:00:00.560) 0:00:03.308 **********
2026-03-08 00:58:34.557488 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-08 00:58:34.557497 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-08 00:58:34.557503 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-08 00:58:34.557510 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-08 00:58:34.557800 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-08 00:58:34.557822 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-08 00:58:34.557829 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-08 00:58:34.557835 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-08 00:58:34.557840 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-08 00:58:34.557846 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-08 00:58:34.557852 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-08 00:58:34.557857 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-08 00:58:34.557864 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-08 00:58:34.557870 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-08 00:58:34.557876 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-08 00:58:34.557882 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-08 00:58:34.557888 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-08 00:58:34.557894 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-08 00:58:34.557900 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-08 00:58:34.557906 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-08 00:58:34.557912 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-08 00:58:34.557919 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-08 00:58:34.557923 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-08 00:58:34.557927 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-08 00:58:34.557932 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-08 00:58:34.557938 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-08 00:58:34.557942 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-08 00:58:34.557946 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-08 00:58:34.557950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-08 00:58:34.557968 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-08 00:58:34.557972 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-08 00:58:34.557976 | orchestrator | included:
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-08 00:58:34.557980 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-08 00:58:34.557985 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-08 00:58:34.557989 | orchestrator |
2026-03-08 00:58:34.557993 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.557997 | orchestrator | Sunday 08 March 2026 00:57:01 +0000 (0:00:00.805) 0:00:04.114 **********
2026-03-08 00:58:34.558001 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558005 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558009 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558051 | orchestrator |
2026-03-08 00:58:34.558056 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558060 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:00.306) 0:00:04.420 **********
2026-03-08 00:58:34.558063 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558068 | orchestrator |
2026-03-08 00:58:34.558079 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558083 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:00.135) 0:00:04.556 **********
2026-03-08 00:58:34.558087 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558091 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558095 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558098 | orchestrator |
2026-03-08 00:58:34.558102 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558106 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:00.488) 0:00:05.044 **********
2026-03-08 00:58:34.558127 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558132 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558136 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558139 | orchestrator |
2026-03-08 00:58:34.558143 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558147 | orchestrator | Sunday 08 March 2026 00:57:03 +0000 (0:00:00.304) 0:00:05.349 **********
2026-03-08 00:58:34.558151 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558155 | orchestrator |
2026-03-08 00:58:34.558158 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558162 | orchestrator | Sunday 08 March 2026 00:57:03 +0000 (0:00:00.114) 0:00:05.464 **********
2026-03-08 00:58:34.558166 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558170 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558174 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558178 | orchestrator |
2026-03-08 00:58:34.558181 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558185 | orchestrator | Sunday 08 March 2026 00:57:03 +0000 (0:00:00.343) 0:00:05.807 **********
2026-03-08 00:58:34.558189 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558192 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558196 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558200 | orchestrator |
2026-03-08 00:58:34.558204 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558208 | orchestrator | Sunday 08 March 2026 00:57:03 +0000 (0:00:00.336) 0:00:06.144 **********
2026-03-08 00:58:34.558211 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558220 | orchestrator |
2026-03-08 00:58:34.558223 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558227 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.340) 0:00:06.485 **********
2026-03-08 00:58:34.558231 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558235 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558238 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558242 | orchestrator |
2026-03-08 00:58:34.558246 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558250 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.299) 0:00:06.784 **********
2026-03-08 00:58:34.558253 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558257 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558261 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558265 | orchestrator |
2026-03-08 00:58:34.558268 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558272 | orchestrator | Sunday 08 March 2026 00:57:04 +0000 (0:00:00.328) 0:00:07.113 **********
2026-03-08 00:58:34.558276 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558280 | orchestrator |
2026-03-08 00:58:34.558283 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558287 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.187) 0:00:07.301 **********
2026-03-08 00:58:34.558291 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558295 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558322 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558327 | orchestrator |
2026-03-08 00:58:34.558331 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558334 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.343) 0:00:07.644 **********
2026-03-08 00:58:34.558338 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558342 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558345 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558349 | orchestrator |
2026-03-08 00:58:34.558353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558357 | orchestrator | Sunday 08 March 2026 00:57:05 +0000 (0:00:00.514) 0:00:08.158 **********
2026-03-08 00:58:34.558360 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558364 | orchestrator |
2026-03-08 00:58:34.558370 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558374 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.128) 0:00:08.287 **********
2026-03-08 00:58:34.558378 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558381 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558385 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558389 | orchestrator |
2026-03-08 00:58:34.558392 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558396 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.299) 0:00:08.587 **********
2026-03-08 00:58:34.558400 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558404 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558407 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558411 | orchestrator |
2026-03-08 00:58:34.558415 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558419 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.322) 0:00:08.909 **********
2026-03-08 00:58:34.558422 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558426 | orchestrator |
2026-03-08 00:58:34.558430 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558433 | orchestrator | Sunday 08 March 2026 00:57:06 +0000 (0:00:00.135) 0:00:09.045 **********
2026-03-08 00:58:34.558437 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558441 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558445 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558448 | orchestrator |
2026-03-08 00:58:34.558455 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558463 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.279) 0:00:09.325 **********
2026-03-08 00:58:34.558467 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558471 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558475 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558478 | orchestrator |
2026-03-08 00:58:34.558482 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558486 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.595) 0:00:09.921 **********
2026-03-08 00:58:34.558490 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558493 | orchestrator |
2026-03-08 00:58:34.558497 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558501 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.124) 0:00:10.046 **********
2026-03-08 00:58:34.558505 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558508 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558528 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558533 | orchestrator |
2026-03-08 00:58:34.558537 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558541 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.324) 0:00:10.335 **********
2026-03-08 00:58:34.558544 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558548 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558552 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558556 | orchestrator |
2026-03-08 00:58:34.558560 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558563 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.324) 0:00:10.660 **********
2026-03-08 00:58:34.558567 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558571 | orchestrator |
2026-03-08 00:58:34.558575 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558578 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.128) 0:00:10.789 **********
2026-03-08 00:58:34.558582 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558586 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558590 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558593 | orchestrator |
2026-03-08 00:58:34.558598 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558604 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.553) 0:00:11.343 **********
2026-03-08 00:58:34.558610 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558615 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558622 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558632 | orchestrator |
2026-03-08 00:58:34.558638 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558645 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.307) 0:00:11.651 **********
2026-03-08 00:58:34.558651 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558659 | orchestrator |
2026-03-08 00:58:34.558666 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558672 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.166) 0:00:11.817 **********
2026-03-08 00:58:34.558678 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558684 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558691 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558698 | orchestrator |
2026-03-08 00:58:34.558704 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-08 00:58:34.558710 | orchestrator | Sunday 08 March 2026 00:57:09 +0000 (0:00:00.321) 0:00:12.139 **********
2026-03-08 00:58:34.558717 | orchestrator | ok: [testbed-node-0]
2026-03-08 00:58:34.558722 | orchestrator | ok: [testbed-node-1]
2026-03-08 00:58:34.558731 | orchestrator | ok: [testbed-node-2]
2026-03-08 00:58:34.558736 | orchestrator |
2026-03-08 00:58:34.558742 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-08 00:58:34.558754 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.323) 0:00:12.463 **********
2026-03-08 00:58:34.558760 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558766 | orchestrator |
2026-03-08 00:58:34.558773 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-08 00:58:34.558779 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.142) 0:00:12.605 **********
2026-03-08 00:58:34.558785 | orchestrator | skipping: [testbed-node-0]
2026-03-08 00:58:34.558792 | orchestrator | skipping: [testbed-node-1]
2026-03-08 00:58:34.558798 | orchestrator | skipping: [testbed-node-2]
2026-03-08 00:58:34.558804 | orchestrator |
2026-03-08 00:58:34.558810 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-08 00:58:34.558817 | orchestrator | Sunday 08 March 2026 00:57:10 +0000 (0:00:00.502) 0:00:13.108 **********
2026-03-08 00:58:34.558828 | orchestrator | changed: [testbed-node-1]
2026-03-08 00:58:34.558834 | orchestrator | changed: [testbed-node-2]
2026-03-08 00:58:34.558841 | orchestrator | changed: [testbed-node-0]
2026-03-08 00:58:34.558845 | orchestrator |
2026-03-08 00:58:34.558849 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-08 00:58:34.558853 | orchestrator | Sunday 08 March 2026 00:57:12 +0000 (0:00:01.805) 0:00:14.913 **********
2026-03-08 00:58:34.558857 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-08 00:58:34.558861 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-08 00:58:34.558864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-08 00:58:34.558868 | orchestrator |
2026-03-08 00:58:34.558872 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-08 00:58:34.558875 | orchestrator | Sunday 08 March 2026 00:57:14 +0000 (0:00:02.183) 0:00:17.096 **********
2026-03-08 00:58:34.558879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-08 00:58:34.558884 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-08 00:58:34.558888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-08 00:58:34.558892 | orchestrator |
2026-03-08 00:58:34.558895 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-08 00:58:34.558904 |
orchestrator | Sunday 08 March 2026 00:57:17 +0000 (0:00:02.740) 0:00:19.837 ********** 2026-03-08 00:58:34.558908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:34.558912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:34.558915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-08 00:58:34.558919 | orchestrator | 2026-03-08 00:58:34.558923 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-08 00:58:34.558927 | orchestrator | Sunday 08 March 2026 00:57:19 +0000 (0:00:02.262) 0:00:22.099 ********** 2026-03-08 00:58:34.558931 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:34.558934 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:34.558938 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:34.558942 | orchestrator | 2026-03-08 00:58:34.558946 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-08 00:58:34.558949 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:00.465) 0:00:22.565 ********** 2026-03-08 00:58:34.558953 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:34.558957 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:34.558961 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:34.558965 | orchestrator | 2026-03-08 00:58:34.558968 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:34.558975 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:00.304) 0:00:22.869 ********** 2026-03-08 00:58:34.558979 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:58:34.558983 | orchestrator | 2026-03-08 
00:58:34.558987 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-08 00:58:34.558991 | orchestrator | Sunday 08 March 2026 00:57:21 +0000 (0:00:00.792) 0:00:23.662 ********** 2026-03-08 00:58:34.559004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:34.559015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:34.559027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:34.559031 | orchestrator | 2026-03-08 00:58:34.559035 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-08 00:58:34.559039 | orchestrator | Sunday 08 March 2026 00:57:22 +0000 (0:00:01.515) 0:00:25.177 ********** 2026-03-08 00:58:34.559048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34.559059 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:34.559076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34 | INFO  | Task f6f0ca12-a956-4870-8336-bfc7ab47c4a9 is in state SUCCESS 2026-03-08 00:58:34.559099 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:34.559106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2',
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34.559120 | 
orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:34.559125 | orchestrator | 2026-03-08 00:58:34.559131 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-08 00:58:34.559138 | orchestrator | Sunday 08 March 2026 00:57:23 +0000 (0:00:00.668) 0:00:25.846 ********** 2026-03-08 00:58:34.559154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34.559167 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:34.559175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34.559181 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:34.559197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-08 00:58:34.559205 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:34.559209 | orchestrator | 2026-03-08 00:58:34.559213 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-08 00:58:34.559217 | orchestrator | Sunday 08 March 2026 00:57:24 +0000 (0:00:00.820) 0:00:26.667 ********** 2026-03-08 00:58:34.559224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:34.559233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-08 00:58:34.559243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-03-08 00:58:34.559247 | orchestrator | 2026-03-08 00:58:34.559251 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:34.559255 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:01.636) 0:00:28.303 ********** 2026-03-08 00:58:34.559259 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:58:34.559263 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:58:34.559266 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:58:34.559270 | orchestrator | 2026-03-08 00:58:34.559274 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-08 00:58:34.559278 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.306) 0:00:28.610 ********** 2026-03-08 00:58:34.559286 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:58:34.559290 | orchestrator | 2026-03-08 00:58:34.559296 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-08 00:58:34.559300 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:00.510) 0:00:29.120 ********** 2026-03-08 00:58:34.559303 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:58:34.559307 | orchestrator | 2026-03-08 00:58:34.559311 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-08 00:58:34.559315 | orchestrator | Sunday 08 March 2026 00:57:29 +0000 (0:00:02.651) 0:00:31.772 ********** 2026-03-08 00:58:34.559318 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:58:34.559322 | orchestrator | 2026-03-08 00:58:34.559326 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-08 00:58:34.559330 | orchestrator | Sunday 08 March 2026 00:57:32 +0000 (0:00:02.889) 0:00:34.661 ********** 2026-03-08 00:58:34.559333 | 
orchestrator | changed: [testbed-node-0] 2026-03-08 00:58:34.559337 | orchestrator | 2026-03-08 00:58:34.559341 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-08 00:58:34.559345 | orchestrator | Sunday 08 March 2026 00:57:49 +0000 (0:00:16.783) 0:00:51.445 ********** 2026-03-08 00:58:34.559349 | orchestrator | 2026-03-08 00:58:34.559355 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-08 00:58:34.559361 | orchestrator | Sunday 08 March 2026 00:57:49 +0000 (0:00:00.068) 0:00:51.514 ********** 2026-03-08 00:58:34.559367 | orchestrator | 2026-03-08 00:58:34.559372 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-08 00:58:34.559378 | orchestrator | Sunday 08 March 2026 00:57:49 +0000 (0:00:00.068) 0:00:51.582 ********** 2026-03-08 00:58:34.559384 | orchestrator | 2026-03-08 00:58:34.559391 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-08 00:58:34.559397 | orchestrator | Sunday 08 March 2026 00:57:49 +0000 (0:00:00.069) 0:00:51.652 ********** 2026-03-08 00:58:34.559405 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:58:34.559410 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:58:34.559415 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:58:34.559422 | orchestrator | 2026-03-08 00:58:34.559428 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:58:34.559434 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-08 00:58:34.559440 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-08 00:58:34.559446 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-08 00:58:34.559453 | 
orchestrator | 2026-03-08 00:58:34.559458 | orchestrator | 2026-03-08 00:58:34.559466 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:58:34.559470 | orchestrator | Sunday 08 March 2026 00:58:32 +0000 (0:00:43.365) 0:01:35.018 ********** 2026-03-08 00:58:34.559474 | orchestrator | =============================================================================== 2026-03-08 00:58:34.559478 | orchestrator | horizon : Restart horizon container ------------------------------------ 43.37s 2026-03-08 00:58:34.559481 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.78s 2026-03-08 00:58:34.559485 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.89s 2026-03-08 00:58:34.559489 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.74s 2026-03-08 00:58:34.559493 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.65s 2026-03-08 00:58:34.559496 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.26s 2026-03-08 00:58:34.559504 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.18s 2026-03-08 00:58:34.559508 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.81s 2026-03-08 00:58:34.559512 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s 2026-03-08 00:58:34.559531 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2026-03-08 00:58:34.559540 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2026-03-08 00:58:34.559547 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.82s 2026-03-08 00:58:34.559553 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.81s 2026-03-08 00:58:34.559562 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2026-03-08 00:58:34.559570 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-03-08 00:58:34.559576 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2026-03-08 00:58:34.559582 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.56s 2026-03-08 00:58:34.559589 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-03-08 00:58:34.559594 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-08 00:58:34.559600 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-03-08 00:58:34.559606 | orchestrator | 2026-03-08 00:58:34 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED 2026-03-08 00:58:34.559612 | orchestrator | 2026-03-08 00:58:34 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:34.559622 | orchestrator | 2026-03-08 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:37.600160 | orchestrator | 2026-03-08 00:58:37 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED 2026-03-08 00:58:37.601880 | orchestrator | 2026-03-08 00:58:37 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:37.601933 | orchestrator | 2026-03-08 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:40.642452 | orchestrator | 2026-03-08 00:58:40 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED 2026-03-08 00:58:40.642606 | orchestrator | 2026-03-08 00:58:40 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:40.642620 | 
orchestrator | 2026-03-08 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:43.688294 | orchestrator | 2026-03-08 00:58:43 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED 2026-03-08 00:58:43.689486 | orchestrator | 2026-03-08 00:58:43 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:43.689583 | orchestrator | 2026-03-08 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:46.734466 | orchestrator | 2026-03-08 00:58:46 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state STARTED 2026-03-08 00:58:46.736233 | orchestrator | 2026-03-08 00:58:46 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:46.736274 | orchestrator | 2026-03-08 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:49.789359 | orchestrator | 2026-03-08 00:58:49 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:58:49.790661 | orchestrator | 2026-03-08 00:58:49 | INFO  | Task 5bf92245-aaca-445b-84e1-c2ecb4e4a3e2 is in state SUCCESS 2026-03-08 00:58:49.792657 | orchestrator | 2026-03-08 00:58:49 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:49.792756 | orchestrator | 2026-03-08 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:52.844863 | orchestrator | 2026-03-08 00:58:52 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:58:52.845564 | orchestrator | 2026-03-08 00:58:52 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:52.845682 | orchestrator | 2026-03-08 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:55.900870 | orchestrator | 2026-03-08 00:58:55 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:58:55.900941 | orchestrator | 2026-03-08 00:58:55 | INFO  | Task 
3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:55.900948 | orchestrator | 2026-03-08 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:58:58.959544 | orchestrator | 2026-03-08 00:58:58 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:58:58.960449 | orchestrator | 2026-03-08 00:58:58 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:58:58.960656 | orchestrator | 2026-03-08 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:01.997045 | orchestrator | 2026-03-08 00:59:02 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:01.998693 | orchestrator | 2026-03-08 00:59:02 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:01.998735 | orchestrator | 2026-03-08 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:05.031871 | orchestrator | 2026-03-08 00:59:05 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:05.034785 | orchestrator | 2026-03-08 00:59:05 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:05.034869 | orchestrator | 2026-03-08 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:08.082507 | orchestrator | 2026-03-08 00:59:08 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:08.085549 | orchestrator | 2026-03-08 00:59:08 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:08.085595 | orchestrator | 2026-03-08 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:11.129523 | orchestrator | 2026-03-08 00:59:11 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:11.132070 | orchestrator | 2026-03-08 00:59:11 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 
00:59:11.132161 | orchestrator | 2026-03-08 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:14.169421 | orchestrator | 2026-03-08 00:59:14 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:14.171528 | orchestrator | 2026-03-08 00:59:14 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:14.171589 | orchestrator | 2026-03-08 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:17.209851 | orchestrator | 2026-03-08 00:59:17 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:17.211500 | orchestrator | 2026-03-08 00:59:17 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:17.211568 | orchestrator | 2026-03-08 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:20.262729 | orchestrator | 2026-03-08 00:59:20 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:20.263865 | orchestrator | 2026-03-08 00:59:20 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:20.264508 | orchestrator | 2026-03-08 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:23.310888 | orchestrator | 2026-03-08 00:59:23 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:23.312456 | orchestrator | 2026-03-08 00:59:23 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:23.312501 | orchestrator | 2026-03-08 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:26.358909 | orchestrator | 2026-03-08 00:59:26 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:26.360258 | orchestrator | 2026-03-08 00:59:26 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:26.360398 | orchestrator | 2026-03-08 00:59:26 | INFO  | Wait 1 second(s) 
until the next check 2026-03-08 00:59:29.406094 | orchestrator | 2026-03-08 00:59:29 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:29.406942 | orchestrator | 2026-03-08 00:59:29 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:29.407089 | orchestrator | 2026-03-08 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:32.456746 | orchestrator | 2026-03-08 00:59:32 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:32.459135 | orchestrator | 2026-03-08 00:59:32 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:32.459348 | orchestrator | 2026-03-08 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:35.506766 | orchestrator | 2026-03-08 00:59:35 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:35.507375 | orchestrator | 2026-03-08 00:59:35 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state STARTED 2026-03-08 00:59:35.507467 | orchestrator | 2026-03-08 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:38.550592 | orchestrator | 2026-03-08 00:59:38 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:38.551975 | orchestrator | 2026-03-08 00:59:38 | INFO  | Task 3f42d72f-124d-4ed7-b37f-8af5d1d153a6 is in state SUCCESS 2026-03-08 00:59:38.552539 | orchestrator | 2026-03-08 00:59:38.552582 | orchestrator | 2026-03-08 00:59:38.552593 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-08 00:59:38.552604 | orchestrator | 2026-03-08 00:59:38.552614 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-08 00:59:38.552620 | orchestrator | Sunday 08 March 2026 00:58:12 +0000 (0:00:00.198) 0:00:00.198 ********** 2026-03-08 00:59:38.552627 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-08 00:59:38.552634 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552639 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552645 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 00:59:38.552651 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552657 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-08 00:59:38.552663 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-08 00:59:38.552668 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:38.552697 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:38.552703 | orchestrator | 2026-03-08 00:59:38.552712 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-08 00:59:38.552722 | orchestrator | Sunday 08 March 2026 00:58:17 +0000 (0:00:04.141) 0:00:04.339 ********** 2026-03-08 00:59:38.552737 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-08 00:59:38.552748 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552757 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring) 2026-03-08 00:59:38.552775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.552783 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-08 00:59:38.552792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-08 00:59:38.552920 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:38.552932 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:38.552941 | orchestrator | 2026-03-08 00:59:38.552999 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-08 00:59:38.553007 | orchestrator | Sunday 08 March 2026 00:58:21 +0000 (0:00:04.329) 0:00:08.669 ********** 2026-03-08 00:59:38.553014 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-08 00:59:38.553020 | orchestrator | 2026-03-08 00:59:38.553092 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-08 00:59:38.553099 | orchestrator | Sunday 08 March 2026 00:58:22 +0000 (0:00:01.052) 0:00:09.721 ********** 2026-03-08 00:59:38.553105 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-08 00:59:38.553111 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.553117 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.553123 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 00:59:38.553132 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-08 
00:59:38.553142 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-08 00:59:38.553151 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-08 00:59:38.553161 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:38.553171 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:38.553225 | orchestrator | 2026-03-08 00:59:38.553233 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-08 00:59:38.553239 | orchestrator | Sunday 08 March 2026 00:58:36 +0000 (0:00:14.519) 0:00:24.241 ********** 2026-03-08 00:59:38.553244 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-08 00:59:38.553250 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-08 00:59:38.553256 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-08 00:59:38.553267 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-08 00:59:38.553321 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-08 00:59:38.553334 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-08 00:59:38.553344 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-08 00:59:38.553421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-08 00:59:38.553432 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-08 00:59:38.553442 | orchestrator | 2026-03-08 00:59:38.553452 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-08 00:59:38.553462 | orchestrator | Sunday 08 March 2026 00:58:40 +0000 (0:00:03.212) 0:00:27.454 ********** 2026-03-08 00:59:38.553541 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-08 00:59:38.553553 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.553564 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.553573 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 00:59:38.553583 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-08 00:59:38.553593 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-08 00:59:38.553604 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-08 00:59:38.553615 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-08 00:59:38.553625 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-08 00:59:38.553636 | orchestrator | 2026-03-08 00:59:38.553646 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:59:38.553656 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 00:59:38.553667 | orchestrator | 2026-03-08 00:59:38.553677 | orchestrator | 2026-03-08 00:59:38.553688 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:59:38.553698 | orchestrator | Sunday 08 March 2026 00:58:47 +0000 (0:00:07.153) 0:00:34.607 ********** 2026-03-08 00:59:38.553707 | 
orchestrator | =============================================================================== 2026-03-08 00:59:38.553717 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.52s 2026-03-08 00:59:38.553727 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.15s 2026-03-08 00:59:38.553737 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.33s 2026-03-08 00:59:38.553747 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.14s 2026-03-08 00:59:38.553757 | orchestrator | Check if target directories exist --------------------------------------- 3.21s 2026-03-08 00:59:38.553767 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-03-08 00:59:38.553777 | orchestrator | 2026-03-08 00:59:38.554351 | orchestrator | 2026-03-08 00:59:38.554674 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 00:59:38.554685 | orchestrator | 2026-03-08 00:59:38.554690 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 00:59:38.554696 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.266) 0:00:00.266 ********** 2026-03-08 00:59:38.554702 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.554708 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:38.554713 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:38.554719 | orchestrator | 2026-03-08 00:59:38.554724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 00:59:38.554730 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.298) 0:00:00.564 ********** 2026-03-08 00:59:38.554744 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-08 00:59:38.554750 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2026-03-08 00:59:38.554756 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-08 00:59:38.554761 | orchestrator | 2026-03-08 00:59:38.554767 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-08 00:59:38.554772 | orchestrator | 2026-03-08 00:59:38.554778 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.554783 | orchestrator | Sunday 08 March 2026 00:56:58 +0000 (0:00:00.450) 0:00:01.015 ********** 2026-03-08 00:59:38.554789 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:38.554795 | orchestrator | 2026-03-08 00:59:38.554801 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-08 00:59:38.554806 | orchestrator | Sunday 08 March 2026 00:56:59 +0000 (0:00:00.573) 0:00:01.589 ********** 2026-03-08 00:59:38.554822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.554832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.554860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.554873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.554915 | orchestrator | 2026-03-08 00:59:38.554920 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-08 00:59:38.554926 | orchestrator | Sunday 08 March 2026 00:57:01 +0000 (0:00:01.959) 0:00:03.548 ********** 2026-03-08 00:59:38.554936 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.554941 | orchestrator | 2026-03-08 00:59:38.554950 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-08 00:59:38.554955 | orchestrator | Sunday 08 March 2026 00:57:01 +0000 (0:00:00.131) 0:00:03.680 ********** 2026-03-08 00:59:38.554961 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.554966 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.554972 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.554979 | orchestrator | 2026-03-08 00:59:38.554987 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-08 00:59:38.554996 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:00.430) 0:00:04.111 ********** 2026-03-08 00:59:38.555006 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:59:38.555015 | orchestrator | 2026-03-08 00:59:38.555024 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.555033 | orchestrator | Sunday 08 March 2026 00:57:02 +0000 (0:00:00.865) 0:00:04.977 ********** 2026-03-08 00:59:38.555042 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:38.555052 | orchestrator | 2026-03-08 00:59:38.555061 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-08 00:59:38.555070 | orchestrator | Sunday 08 March 2026 00:57:03 +0000 (0:00:00.552) 0:00:05.530 ********** 
2026-03-08 00:59:38.555085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555173 | orchestrator | 2026-03-08 00:59:38.555178 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-08 00:59:38.555184 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:03.585) 0:00:09.115 ********** 2026-03-08 00:59:38.555195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555213 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555265 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555279 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555285 | orchestrator | 2026-03-08 00:59:38.555290 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-08 00:59:38.555297 | orchestrator | Sunday 08 March 2026 00:57:07 +0000 (0:00:00.577) 0:00:09.693 ********** 2026-03-08 00:59:38.555303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555314 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555332 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555365 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555440 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555446 | orchestrator | 2026-03-08 00:59:38.555453 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-08 00:59:38.555459 | orchestrator | Sunday 08 March 2026 00:57:08 +0000 (0:00:00.774) 0:00:10.468 ********** 2026-03-08 00:59:38.555469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555546 | orchestrator | 2026-03-08 00:59:38.555552 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-08 00:59:38.555559 | orchestrator | Sunday 08 March 2026 00:57:11 +0000 (0:00:03.515) 0:00:13.983 ********** 2026-03-08 00:59:38.555570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.555616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.555649 | orchestrator | 2026-03-08 00:59:38.555656 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-08 00:59:38.555662 | orchestrator | Sunday 08 March 2026 00:57:17 +0000 (0:00:05.893) 0:00:19.877 ********** 2026-03-08 00:59:38.555668 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:38.555673 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:38.555679 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.555684 | orchestrator | 2026-03-08 00:59:38.555690 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-08 00:59:38.555695 | orchestrator | Sunday 08 March 2026 00:57:19 +0000 (0:00:01.662) 0:00:21.539 ********** 2026-03-08 00:59:38.555701 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555706 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555712 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555717 | orchestrator | 2026-03-08 00:59:38.555722 | orchestrator | TASK [keystone : Get file list in custom domains 
folder] *********************** 2026-03-08 00:59:38.555728 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:00.583) 0:00:22.122 ********** 2026-03-08 00:59:38.555733 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555739 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555744 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555749 | orchestrator | 2026-03-08 00:59:38.555755 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-08 00:59:38.555760 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:00.340) 0:00:22.463 ********** 2026-03-08 00:59:38.555766 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555771 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555776 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555782 | orchestrator | 2026-03-08 00:59:38.555787 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-08 00:59:38.555793 | orchestrator | Sunday 08 March 2026 00:57:20 +0000 (0:00:00.537) 0:00:23.001 ********** 2026-03-08 00:59:38.555804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555828 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555856 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 00:59:38.555862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-08 00:59:38.555873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-08 00:59:38.555882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-08 00:59:38.555888 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555893 | orchestrator | 2026-03-08 00:59:38.555899 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.555904 | orchestrator | Sunday 08 March 2026 00:57:21 +0000 (0:00:00.627) 0:00:23.628 ********** 2026-03-08 00:59:38.555910 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.555915 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.555921 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.555926 | orchestrator | 2026-03-08 00:59:38.555932 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-08 00:59:38.555937 | orchestrator | Sunday 08 March 2026 00:57:21 +0000 (0:00:00.286) 0:00:23.915 ********** 2026-03-08 00:59:38.555942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-08 00:59:38.555948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-08 00:59:38.555954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-08 00:59:38.555959 | orchestrator | 2026-03-08 00:59:38.555964 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-08 00:59:38.555970 | orchestrator | Sunday 08 March 2026 00:57:23 +0000 (0:00:01.609) 0:00:25.524 ********** 2026-03-08 00:59:38.555975 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-03-08 00:59:38.555981 | orchestrator | 2026-03-08 00:59:38.555986 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-08 00:59:38.555991 | orchestrator | Sunday 08 March 2026 00:57:24 +0000 (0:00:01.037) 0:00:26.562 ********** 2026-03-08 00:59:38.555997 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556002 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.556008 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.556013 | orchestrator | 2026-03-08 00:59:38.556018 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-08 00:59:38.556024 | orchestrator | Sunday 08 March 2026 00:57:25 +0000 (0:00:01.061) 0:00:27.623 ********** 2026-03-08 00:59:38.556032 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 00:59:38.556038 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-08 00:59:38.556043 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-08 00:59:38.556049 | orchestrator | 2026-03-08 00:59:38.556054 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-08 00:59:38.556063 | orchestrator | Sunday 08 March 2026 00:57:26 +0000 (0:00:01.165) 0:00:28.788 ********** 2026-03-08 00:59:38.556069 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.556075 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:38.556080 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:38.556086 | orchestrator | 2026-03-08 00:59:38.556091 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-08 00:59:38.556097 | orchestrator | Sunday 08 March 2026 00:57:27 +0000 (0:00:00.300) 0:00:29.088 ********** 2026-03-08 00:59:38.556102 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-08 00:59:38.556107 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-08 00:59:38.556113 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-08 00:59:38.556118 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-08 00:59:38.556123 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-08 00:59:38.556129 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-08 00:59:38.556134 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-08 00:59:38.556140 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-08 00:59:38.556145 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-08 00:59:38.556151 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-08 00:59:38.556156 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-08 00:59:38.556161 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-08 00:59:38.556167 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:38.556172 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:38.556180 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-08 00:59:38.556186 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-03-08 00:59:38.556192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 00:59:38.556197 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 00:59:38.556203 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:38.556208 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:38.556214 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 00:59:38.556219 | orchestrator | 2026-03-08 00:59:38.556225 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-08 00:59:38.556230 | orchestrator | Sunday 08 March 2026 00:57:36 +0000 (0:00:09.112) 0:00:38.201 ********** 2026-03-08 00:59:38.556235 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:38.556241 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:38.556249 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 00:59:38.556255 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:38.556260 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:38.556266 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 00:59:38.556271 | orchestrator | 2026-03-08 00:59:38.556276 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-08 00:59:38.556282 | orchestrator | Sunday 08 March 2026 00:57:39 +0000 (0:00:02.905) 0:00:41.107 ********** 2026-03-08 00:59:38.556291 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.556297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.556306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-08 00:59:38.556312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-08 00:59:38.556355 | orchestrator | 2026-03-08 00:59:38.556363 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.556369 | orchestrator | Sunday 08 March 2026 00:57:41 +0000 (0:00:02.540) 0:00:43.647 ********** 2026-03-08 00:59:38.556374 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556399 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.556405 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.556414 | orchestrator | 2026-03-08 00:59:38.556420 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-08 00:59:38.556425 | orchestrator | Sunday 08 March 2026 00:57:41 +0000 (0:00:00.312) 0:00:43.960 ********** 2026-03-08 00:59:38.556431 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556436 | orchestrator | 2026-03-08 00:59:38.556442 | orchestrator | TASK [keystone : Creating 
Keystone database user and setting permissions] ****** 2026-03-08 00:59:38.556447 | orchestrator | Sunday 08 March 2026 00:57:44 +0000 (0:00:02.603) 0:00:46.564 ********** 2026-03-08 00:59:38.556453 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556458 | orchestrator | 2026-03-08 00:59:38.556463 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-08 00:59:38.556469 | orchestrator | Sunday 08 March 2026 00:57:46 +0000 (0:00:02.456) 0:00:49.020 ********** 2026-03-08 00:59:38.556474 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:38.556480 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.556485 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:38.556491 | orchestrator | 2026-03-08 00:59:38.556496 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-08 00:59:38.556502 | orchestrator | Sunday 08 March 2026 00:57:47 +0000 (0:00:01.039) 0:00:50.059 ********** 2026-03-08 00:59:38.556507 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.556512 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:38.556518 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:38.556523 | orchestrator | 2026-03-08 00:59:38.556529 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-08 00:59:38.556534 | orchestrator | Sunday 08 March 2026 00:57:48 +0000 (0:00:00.331) 0:00:50.391 ********** 2026-03-08 00:59:38.556540 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556545 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.556550 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.556556 | orchestrator | 2026-03-08 00:59:38.556561 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-08 00:59:38.556567 | orchestrator | Sunday 08 March 2026 00:57:48 +0000 (0:00:00.322) 0:00:50.713 ********** 
2026-03-08 00:59:38.556572 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556578 | orchestrator | 2026-03-08 00:59:38.556583 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-08 00:59:38.556589 | orchestrator | Sunday 08 March 2026 00:58:04 +0000 (0:00:16.360) 0:01:07.074 ********** 2026-03-08 00:59:38.556594 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556600 | orchestrator | 2026-03-08 00:59:38.556605 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-08 00:59:38.556611 | orchestrator | Sunday 08 March 2026 00:58:15 +0000 (0:00:10.963) 0:01:18.038 ********** 2026-03-08 00:59:38.556616 | orchestrator | 2026-03-08 00:59:38.556621 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-08 00:59:38.556627 | orchestrator | Sunday 08 March 2026 00:58:16 +0000 (0:00:00.064) 0:01:18.103 ********** 2026-03-08 00:59:38.556632 | orchestrator | 2026-03-08 00:59:38.556638 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-08 00:59:38.556647 | orchestrator | Sunday 08 March 2026 00:58:16 +0000 (0:00:00.062) 0:01:18.165 ********** 2026-03-08 00:59:38.556652 | orchestrator | 2026-03-08 00:59:38.556658 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-08 00:59:38.556663 | orchestrator | Sunday 08 March 2026 00:58:16 +0000 (0:00:00.064) 0:01:18.230 ********** 2026-03-08 00:59:38.556669 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556674 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:38.556680 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:38.556685 | orchestrator | 2026-03-08 00:59:38.556690 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-08 00:59:38.556696 | orchestrator | Sunday 
08 March 2026 00:58:26 +0000 (0:00:10.651) 0:01:28.881 ********** 2026-03-08 00:59:38.556701 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556710 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:38.556715 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:38.556721 | orchestrator | 2026-03-08 00:59:38.556726 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-08 00:59:38.556732 | orchestrator | Sunday 08 March 2026 00:58:31 +0000 (0:00:04.589) 0:01:33.471 ********** 2026-03-08 00:59:38.556737 | orchestrator | changed: [testbed-node-1] 2026-03-08 00:59:38.556743 | orchestrator | changed: [testbed-node-2] 2026-03-08 00:59:38.556748 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556754 | orchestrator | 2026-03-08 00:59:38.556759 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.556765 | orchestrator | Sunday 08 March 2026 00:58:39 +0000 (0:00:07.643) 0:01:41.114 ********** 2026-03-08 00:59:38.556770 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 00:59:38.556776 | orchestrator | 2026-03-08 00:59:38.556781 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-08 00:59:38.556787 | orchestrator | Sunday 08 March 2026 00:58:39 +0000 (0:00:00.751) 0:01:41.866 ********** 2026-03-08 00:59:38.556792 | orchestrator | ok: [testbed-node-1] 2026-03-08 00:59:38.556798 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.556803 | orchestrator | ok: [testbed-node-2] 2026-03-08 00:59:38.556808 | orchestrator | 2026-03-08 00:59:38.556814 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-08 00:59:38.556819 | orchestrator | Sunday 08 March 2026 00:58:40 +0000 (0:00:00.795) 0:01:42.661 ********** 2026-03-08 
00:59:38.556825 | orchestrator | changed: [testbed-node-0] 2026-03-08 00:59:38.556830 | orchestrator | 2026-03-08 00:59:38.556836 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-08 00:59:38.556841 | orchestrator | Sunday 08 March 2026 00:58:42 +0000 (0:00:01.700) 0:01:44.362 ********** 2026-03-08 00:59:38.556850 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-08 00:59:38.556855 | orchestrator | 2026-03-08 00:59:38.556861 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-08 00:59:38.556867 | orchestrator | Sunday 08 March 2026 00:58:55 +0000 (0:00:13.575) 0:01:57.937 ********** 2026-03-08 00:59:38.556872 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-08 00:59:38.556878 | orchestrator | 2026-03-08 00:59:38.556883 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-08 00:59:38.556888 | orchestrator | Sunday 08 March 2026 00:59:24 +0000 (0:00:28.982) 0:02:26.919 ********** 2026-03-08 00:59:38.556894 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-08 00:59:38.556899 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-08 00:59:38.556905 | orchestrator | 2026-03-08 00:59:38.556910 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-08 00:59:38.556916 | orchestrator | Sunday 08 March 2026 00:59:32 +0000 (0:00:07.535) 0:02:34.455 ********** 2026-03-08 00:59:38.556921 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556927 | orchestrator | 2026-03-08 00:59:38.556932 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-08 00:59:38.556938 | orchestrator | Sunday 08 March 2026 00:59:32 +0000 (0:00:00.135) 
0:02:34.590 ********** 2026-03-08 00:59:38.556943 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556948 | orchestrator | 2026-03-08 00:59:38.556954 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-08 00:59:38.556959 | orchestrator | Sunday 08 March 2026 00:59:32 +0000 (0:00:00.124) 0:02:34.714 ********** 2026-03-08 00:59:38.556965 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556970 | orchestrator | 2026-03-08 00:59:38.556975 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-08 00:59:38.556981 | orchestrator | Sunday 08 March 2026 00:59:32 +0000 (0:00:00.146) 0:02:34.861 ********** 2026-03-08 00:59:38.556990 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.556995 | orchestrator | 2026-03-08 00:59:38.557001 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-08 00:59:38.557006 | orchestrator | Sunday 08 March 2026 00:59:33 +0000 (0:00:00.506) 0:02:35.368 ********** 2026-03-08 00:59:38.557012 | orchestrator | ok: [testbed-node-0] 2026-03-08 00:59:38.557017 | orchestrator | 2026-03-08 00:59:38.557023 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-08 00:59:38.557028 | orchestrator | Sunday 08 March 2026 00:59:36 +0000 (0:00:03.502) 0:02:38.870 ********** 2026-03-08 00:59:38.557034 | orchestrator | skipping: [testbed-node-0] 2026-03-08 00:59:38.557039 | orchestrator | skipping: [testbed-node-1] 2026-03-08 00:59:38.557045 | orchestrator | skipping: [testbed-node-2] 2026-03-08 00:59:38.557050 | orchestrator | 2026-03-08 00:59:38.557055 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 00:59:38.557062 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-08 00:59:38.557070 | 
orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 00:59:38.557076 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 00:59:38.557082 | orchestrator | 2026-03-08 00:59:38.557087 | orchestrator | 2026-03-08 00:59:38.557093 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 00:59:38.557098 | orchestrator | Sunday 08 March 2026 00:59:37 +0000 (0:00:00.430) 0:02:39.301 ********** 2026-03-08 00:59:38.557113 | orchestrator | =============================================================================== 2026-03-08 00:59:38.557126 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.98s 2026-03-08 00:59:38.557131 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.36s 2026-03-08 00:59:38.557137 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.57s 2026-03-08 00:59:38.557142 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.96s 2026-03-08 00:59:38.557148 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.65s 2026-03-08 00:59:38.557153 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.11s 2026-03-08 00:59:38.557159 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.64s 2026-03-08 00:59:38.557164 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.54s 2026-03-08 00:59:38.557169 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.89s 2026-03-08 00:59:38.557175 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.59s 2026-03-08 00:59:38.557180 | orchestrator | service-cert-copy : 
keystone | Copying over extra CA certificates ------- 3.59s 2026-03-08 00:59:38.557186 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.52s 2026-03-08 00:59:38.557191 | orchestrator | keystone : Creating default user role ----------------------------------- 3.50s 2026-03-08 00:59:38.557197 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.91s 2026-03-08 00:59:38.557202 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.60s 2026-03-08 00:59:38.557207 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.54s 2026-03-08 00:59:38.557213 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.46s 2026-03-08 00:59:38.557221 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.96s 2026-03-08 00:59:38.557227 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.70s 2026-03-08 00:59:38.557232 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.66s 2026-03-08 00:59:38.557241 | orchestrator | 2026-03-08 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:41.615611 | orchestrator | 2026-03-08 00:59:41 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:41.615693 | orchestrator | 2026-03-08 00:59:41 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:41.615706 | orchestrator | 2026-03-08 00:59:41 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:41.615711 | orchestrator | 2026-03-08 00:59:41 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:41.615718 | orchestrator | 2026-03-08 00:59:41 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:41.615726 | 
orchestrator | 2026-03-08 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:44.648635 | orchestrator | 2026-03-08 00:59:44 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:44.648737 | orchestrator | 2026-03-08 00:59:44 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:44.648749 | orchestrator | 2026-03-08 00:59:44 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:44.648758 | orchestrator | 2026-03-08 00:59:44 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:44.648766 | orchestrator | 2026-03-08 00:59:44 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:44.648774 | orchestrator | 2026-03-08 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:47.681833 | orchestrator | 2026-03-08 00:59:47 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:47.682590 | orchestrator | 2026-03-08 00:59:47 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:47.684046 | orchestrator | 2026-03-08 00:59:47 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state STARTED 2026-03-08 00:59:47.685036 | orchestrator | 2026-03-08 00:59:47 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:47.688691 | orchestrator | 2026-03-08 00:59:47 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:47.689276 | orchestrator | 2026-03-08 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:50.740727 | orchestrator | 2026-03-08 00:59:50 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:50.744697 | orchestrator | 2026-03-08 00:59:50 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:50.747526 | orchestrator | 2026-03-08 
00:59:50 | INFO  | Task 8459e362-f543-4453-85e7-046962e5a217 is in state SUCCESS 2026-03-08 00:59:50.749438 | orchestrator | 2026-03-08 00:59:50 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:50.752719 | orchestrator | 2026-03-08 00:59:50 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:50.756117 | orchestrator | 2026-03-08 00:59:50 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 00:59:50.756737 | orchestrator | 2026-03-08 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:53.811839 | orchestrator | 2026-03-08 00:59:53 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:53.812787 | orchestrator | 2026-03-08 00:59:53 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:53.813832 | orchestrator | 2026-03-08 00:59:53 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:53.815144 | orchestrator | 2026-03-08 00:59:53 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:53.816255 | orchestrator | 2026-03-08 00:59:53 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 00:59:53.816284 | orchestrator | 2026-03-08 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-03-08 00:59:56.851474 | orchestrator | 2026-03-08 00:59:56 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 00:59:56.852808 | orchestrator | 2026-03-08 00:59:56 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 00:59:56.854494 | orchestrator | 2026-03-08 00:59:56 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 00:59:56.855892 | orchestrator | 2026-03-08 00:59:56 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 00:59:56.857533 | orchestrator | 2026-03-08 
167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 01:01:12.887611 | orchestrator | 2026-03-08 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:15.915967 | orchestrator | 2026-03-08 01:01:15 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:15.916107 | orchestrator | 2026-03-08 01:01:15 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:15.916745 | orchestrator | 2026-03-08 01:01:15 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:15.917723 | orchestrator | 2026-03-08 01:01:15 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:15.918509 | orchestrator | 2026-03-08 01:01:15 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 01:01:15.918573 | orchestrator | 2026-03-08 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:18.947019 | orchestrator | 2026-03-08 01:01:18 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:18.948422 | orchestrator | 2026-03-08 01:01:18 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:18.949965 | orchestrator | 2026-03-08 01:01:18 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:18.951241 | orchestrator | 2026-03-08 01:01:18 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:18.953079 | orchestrator | 2026-03-08 01:01:18 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 01:01:18.953113 | orchestrator | 2026-03-08 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:21.980328 | orchestrator | 2026-03-08 01:01:21 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:21.980720 | orchestrator | 2026-03-08 01:01:21 | INFO  | Task 
e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:21.981621 | orchestrator | 2026-03-08 01:01:21 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:21.982647 | orchestrator | 2026-03-08 01:01:21 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:21.983768 | orchestrator | 2026-03-08 01:01:21 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 01:01:21.983816 | orchestrator | 2026-03-08 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:25.016939 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:25.021776 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:25.023522 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:25.027106 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:25.027223 | orchestrator | 2026-03-08 01:01:25 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED 2026-03-08 01:01:25.027234 | orchestrator | 2026-03-08 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:28.053595 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:28.053791 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:28.054609 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:28.056038 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:28.056081 | orchestrator | 2026-03-08 01:01:28 | INFO  | Task 
167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state STARTED
2026-03-08 01:01:28.056090 | orchestrator | 2026-03-08 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:31.112053 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:31.112838 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:31.114910 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:31.115897 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:31.117077 | orchestrator | 2026-03-08 01:01:31 | INFO  | Task 167108bf-ec9e-4c1f-9d9e-52193cfc91c5 is in state SUCCESS
2026-03-08 01:01:31.117339 | orchestrator |
2026-03-08 01:01:31.117356 | orchestrator |
2026-03-08 01:01:31.117361 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-08 01:01:31.117366 | orchestrator |
2026-03-08 01:01:31.117370 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-08 01:01:31.117374 | orchestrator | Sunday 08 March 2026 00:58:52 +0000 (0:00:00.229) 0:00:00.230 **********
2026-03-08 01:01:31.117400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-08 01:01:31.117406 | orchestrator |
2026-03-08 01:01:31.117409 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-08 01:01:31.117414 | orchestrator | Sunday 08 March 2026 00:58:52 +0000 (0:00:00.253) 0:00:00.483 **********
2026-03-08 01:01:31.117418 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-08 01:01:31.117422 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-08 01:01:31.117427 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-08 01:01:31.117431 | orchestrator |
2026-03-08 01:01:31.117435 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-08 01:01:31.117439 | orchestrator | Sunday 08 March 2026 00:58:53 +0000 (0:00:01.247) 0:00:01.731 **********
2026-03-08 01:01:31.117443 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-08 01:01:31.117447 | orchestrator |
2026-03-08 01:01:31.117451 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-08 01:01:31.117455 | orchestrator | Sunday 08 March 2026 00:58:55 +0000 (0:00:01.461) 0:00:03.192 **********
2026-03-08 01:01:31.117458 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117464 | orchestrator |
2026-03-08 01:01:31.117471 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-08 01:01:31.117477 | orchestrator | Sunday 08 March 2026 00:58:56 +0000 (0:00:00.991) 0:00:04.183 **********
2026-03-08 01:01:31.117483 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117489 | orchestrator |
2026-03-08 01:01:31.117495 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-08 01:01:31.117501 | orchestrator | Sunday 08 March 2026 00:58:57 +0000 (0:00:00.934) 0:00:05.118 **********
2026-03-08 01:01:31.117508 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-08 01:01:31.117512 | orchestrator | ok: [testbed-manager]
2026-03-08 01:01:31.117519 | orchestrator |
2026-03-08 01:01:31.117524 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-08 01:01:31.117530 | orchestrator | Sunday 08 March 2026 00:59:40 +0000 (0:00:42.958) 0:00:48.077 **********
2026-03-08 01:01:31.117537 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-08 01:01:31.117546 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-08 01:01:31.117552 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-08 01:01:31.117558 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-08 01:01:31.117564 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-08 01:01:31.117570 | orchestrator |
2026-03-08 01:01:31.117575 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-08 01:01:31.117581 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:04.391) 0:00:52.468 **********
2026-03-08 01:01:31.117587 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-08 01:01:31.117594 | orchestrator |
2026-03-08 01:01:31.117599 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-08 01:01:31.117605 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.374) 0:00:52.842 **********
2026-03-08 01:01:31.117634 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:31.117641 | orchestrator |
2026-03-08 01:01:31.117648 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-08 01:01:31.117654 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.105) 0:00:52.947 **********
2026-03-08 01:01:31.117661 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:31.117667 | orchestrator |
2026-03-08 01:01:31.117671 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-08 01:01:31.117675 | orchestrator | Sunday 08 March 2026 00:59:45 +0000 (0:00:00.431) 0:00:53.379 **********
2026-03-08 01:01:31.117691 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117694 | orchestrator |
2026-03-08 01:01:31.117698 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-08 01:01:31.117702 | orchestrator | Sunday 08 March 2026 00:59:46 +0000 (0:00:01.263) 0:00:54.642 **********
2026-03-08 01:01:31.117706 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117710 | orchestrator |
2026-03-08 01:01:31.117713 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******
2026-03-08 01:01:31.117717 | orchestrator | Sunday 08 March 2026 00:59:47 +0000 (0:00:00.481) 0:00:55.284 **********
2026-03-08 01:01:31.117721 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117725 | orchestrator |
2026-03-08 01:01:31.117729 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-08 01:01:31.117732 | orchestrator | Sunday 08 March 2026 00:59:47 +0000 (0:00:00.481) 0:00:55.766 **********
2026-03-08 01:01:31.117736 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-08 01:01:31.117740 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-08 01:01:31.117744 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-08 01:01:31.117747 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-08 01:01:31.117751 | orchestrator |
2026-03-08 01:01:31.117755 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:01:31.117759 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-08 01:01:31.117764 | orchestrator |
2026-03-08 01:01:31.117767 | orchestrator |
2026-03-08 01:01:31.117780 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:01:31.117784 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:01.340) 0:00:57.107 **********
2026-03-08 01:01:31.117788 | orchestrator | ===============================================================================
2026-03-08 01:01:31.117791 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.96s
2026-03-08 01:01:31.117795 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.39s
2026-03-08 01:01:31.117799 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.46s
2026-03-08 01:01:31.117802 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s
2026-03-08 01:01:31.117807 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.26s
2026-03-08 01:01:31.117810 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2026-03-08 01:01:31.117814 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s
2026-03-08 01:01:31.117818 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s
2026-03-08 01:01:31.117822 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.64s
2026-03-08 01:01:31.117825 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.48s
2026-03-08 01:01:31.117829 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.43s
2026-03-08 01:01:31.117833 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.37s
2026-03-08 01:01:31.117836 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s
2026-03-08 01:01:31.117840 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2026-03-08 01:01:31.117844 | orchestrator |
2026-03-08 01:01:31.117848 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-08 01:01:31.117851 | orchestrator | 2.16.14
2026-03-08 01:01:31.117855 | orchestrator |
2026-03-08 01:01:31.117859 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2026-03-08 01:01:31.117863 | orchestrator |
2026-03-08 01:01:31.117866 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-08 01:01:31.117870 | orchestrator | Sunday 08 March 2026 00:59:52 +0000 (0:00:00.201) 0:00:00.201 **********
2026-03-08 01:01:31.117877 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117881 | orchestrator |
2026-03-08 01:01:31.117885 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-08 01:01:31.117888 | orchestrator | Sunday 08 March 2026 00:59:53 +0000 (0:00:01.407) 0:00:01.609 **********
2026-03-08 01:01:31.117892 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117896 | orchestrator |
2026-03-08 01:01:31.117900 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-08 01:01:31.117903 | orchestrator | Sunday 08 March 2026 00:59:54 +0000 (0:00:00.922) 0:00:02.531 **********
2026-03-08 01:01:31.117907 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117911 | orchestrator |
2026-03-08 01:01:31.117915 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-08 01:01:31.117918 | orchestrator | Sunday 08 March 2026 00:59:55 +0000 (0:00:00.913) 0:00:03.444 **********
2026-03-08 01:01:31.117922 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117926 | orchestrator |
2026-03-08 01:01:31.117930 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-08 01:01:31.117933 | orchestrator | Sunday 08 March 2026 00:59:56 +0000 (0:00:01.025) 0:00:04.470 **********
2026-03-08 01:01:31.117937 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117941 | orchestrator |
2026-03-08 01:01:31.117944 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-08 01:01:31.117951 | orchestrator | Sunday 08 March 2026 00:59:57 +0000 (0:00:01.022) 0:00:05.493 **********
2026-03-08 01:01:31.117955 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117959 | orchestrator |
2026-03-08 01:01:31.117962 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-08 01:01:31.117966 | orchestrator | Sunday 08 March 2026 00:59:58 +0000 (0:00:00.956) 0:00:06.449 **********
2026-03-08 01:01:31.117970 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117975 | orchestrator |
2026-03-08 01:01:31.117979 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-08 01:01:31.117984 | orchestrator | Sunday 08 March 2026 00:59:59 +0000 (0:00:01.138) 0:00:07.588 **********
2026-03-08 01:01:31.117988 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.117992 | orchestrator |
2026-03-08 01:01:31.117997 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-08 01:01:31.118002 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:01.111) 0:00:08.700 **********
2026-03-08 01:01:31.118006 | orchestrator | changed: [testbed-manager]
2026-03-08 01:01:31.118010 | orchestrator |
2026-03-08 01:01:31.118048 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-08 01:01:31.118053 | orchestrator | Sunday 08 March 2026 01:01:04 +0000 (0:01:03.870) 0:01:12.570 **********
2026-03-08 01:01:31.118057 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:01:31.118062 | orchestrator |
2026-03-08 01:01:31.118066 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:31.118071 | orchestrator |
2026-03-08 01:01:31.118075 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:31.118079 | orchestrator | Sunday 08 March 2026 01:01:05 +0000 (0:00:00.218) 0:01:12.788 **********
2026-03-08 01:01:31.118084 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:01:31.118089 | orchestrator |
2026-03-08 01:01:31.118093 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:31.118098 | orchestrator |
2026-03-08 01:01:31.118102 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:31.118106 | orchestrator | Sunday 08 March 2026 01:01:17 +0000 (0:00:12.229) 0:01:25.018 **********
2026-03-08 01:01:31.118111 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:01:31.118115 | orchestrator |
2026-03-08 01:01:31.118124 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-08 01:01:31.118128 | orchestrator |
2026-03-08 01:01:31.118133 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-08 01:01:31.118162 | orchestrator | Sunday 08 March 2026 01:01:29 +0000 (0:00:11.780) 0:01:36.799 **********
2026-03-08 01:01:31.118167 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:01:31.118171 | orchestrator |
2026-03-08 01:01:31.118176 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:01:31.118181 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-08 01:01:31.118185 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:31.118190 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:31.118194 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:01:31.118199 | orchestrator |
2026-03-08 01:01:31.118205 | orchestrator |
2026-03-08 01:01:31.118211 | orchestrator |
2026-03-08 01:01:31.118219 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:01:31.118227 | orchestrator | Sunday 08 March 2026 01:01:30 +0000 (0:00:01.143) 0:01:37.942 **********
2026-03-08 01:01:31.118234 | orchestrator | ===============================================================================
2026-03-08 01:01:31.118240 | orchestrator | Create admin user ------------------------------------------------------ 63.87s
2026-03-08 01:01:31.118245 | orchestrator | Restart ceph manager service ------------------------------------------- 25.15s
2026-03-08 01:01:31.118252 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.41s
2026-03-08 01:01:31.118258 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s
2026-03-08 01:01:31.118264 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.11s
2026-03-08 01:01:31.118270 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.03s
2026-03-08 01:01:31.118276 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s
2026-03-08 01:01:31.118282 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.96s
2026-03-08 01:01:31.118287 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.92s
2026-03-08 01:01:31.118293 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s
2026-03-08 01:01:31.118299 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.22s
2026-03-08 01:01:31.118305 | orchestrator | 2026-03-08 01:01:31 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:34.153098 | orchestrator | 2026-03-08 01:01:34 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:34.155219 | orchestrator | 2026-03-08 01:01:34 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:34.155956 | orchestrator | 2026-03-08 01:01:34 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:34.156565 | orchestrator | 2026-03-08 01:01:34 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:34.156588 | orchestrator | 2026-03-08 01:01:34 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:37.184697 | orchestrator | 2026-03-08 01:01:37 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:37.186528 | orchestrator | 2026-03-08 01:01:37 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:37.187579 | orchestrator | 2026-03-08 01:01:37 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:37.188172 | orchestrator | 2026-03-08 01:01:37 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:37.188299 | orchestrator | 2026-03-08 01:01:37 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:40.267758 | orchestrator | 2026-03-08 01:01:40 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:40.268434 | orchestrator | 2026-03-08 01:01:40 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:40.269943 | orchestrator | 2026-03-08 01:01:40 | INFO  | Task
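The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` entries come from a client that polls asynchronous task states until every task leaves STARTED. A minimal sketch of that polling pattern (hypothetical helper names; not the actual osism implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, delay=1.0, max_checks=10000):
    """Poll each task's state until none is left in STARTED.

    `get_state` is a hypothetical callable mapping a task ID to a state
    string such as "STARTED", "SUCCESS", or "FAILURE".
    """
    pending = set(task_ids)
    final_states = {}
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                # Task reached a terminal state; stop polling it.
                final_states[task_id] = state
                pending.discard(task_id)
        if not pending:
            return final_states
        print(f"INFO  | Wait {delay:g} second(s) until the next check")
        time.sleep(delay)
    raise TimeoutError(f"tasks still pending: {sorted(pending)}")
```

With a one-second delay this reproduces the cadence seen in the log: every tick re-reports each still-running task, then announces the wait before the next check.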
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:40.271210 | orchestrator | 2026-03-08 01:01:40 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:40.271312 | orchestrator | 2026-03-08 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:43.295355 | orchestrator | 2026-03-08 01:01:43 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:43.295685 | orchestrator | 2026-03-08 01:01:43 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:43.297247 | orchestrator | 2026-03-08 01:01:43 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:43.297955 | orchestrator | 2026-03-08 01:01:43 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:43.297983 | orchestrator | 2026-03-08 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:46.332102 | orchestrator | 2026-03-08 01:01:46 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:46.332529 | orchestrator | 2026-03-08 01:01:46 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:46.333645 | orchestrator | 2026-03-08 01:01:46 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:46.334240 | orchestrator | 2026-03-08 01:01:46 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:46.334324 | orchestrator | 2026-03-08 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:01:49.363764 | orchestrator | 2026-03-08 01:01:49 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED 2026-03-08 01:01:49.363811 | orchestrator | 2026-03-08 01:01:49 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:49.364832 | orchestrator | 2026-03-08 01:01:49 | INFO  | Task 
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:49.365622 | orchestrator | 2026-03-08 01:01:49 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:49.365654 | orchestrator | 2026-03-08 01:01:49 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:52.392524 | orchestrator | 2026-03-08 01:01:52 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:52.392739 | orchestrator | 2026-03-08 01:01:52 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:52.393432 | orchestrator | 2026-03-08 01:01:52 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:52.394009 | orchestrator | 2026-03-08 01:01:52 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:52.394064 | orchestrator | 2026-03-08 01:01:52 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:55.416986 | orchestrator | 2026-03-08 01:01:55 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state STARTED
2026-03-08 01:01:55.417595 | orchestrator | 2026-03-08 01:01:55 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:01:55.418494 | orchestrator | 2026-03-08 01:01:55 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:01:55.419629 | orchestrator | 2026-03-08 01:01:55 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:01:55.419683 | orchestrator | 2026-03-08 01:01:55 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:01:58.442597 | orchestrator | 2026-03-08 01:01:58 | INFO  | Task f301f092-9dd4-463c-a4f6-38196b7efbf3 is in state SUCCESS
2026-03-08 01:01:58.444161 | orchestrator |
2026-03-08 01:01:58.444218 | orchestrator |
2026-03-08 01:01:58.444232 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:01:58.444245 | orchestrator |
2026-03-08 01:01:58.444258 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:01:58.444271 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:00.415) 0:00:00.415 **********
2026-03-08 01:01:58.444283 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:01:58.444296 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:01:58.444308 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:01:58.444321 | orchestrator |
2026-03-08 01:01:58.444334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:01:58.444347 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:00.442) 0:00:00.857 **********
2026-03-08 01:01:58.444360 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-08 01:01:58.444373 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-08 01:01:58.444386 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-08 01:01:58.444399 | orchestrator |
2026-03-08 01:01:58.444412 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-08 01:01:58.444426 | orchestrator |
2026-03-08 01:01:58.444439 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-08 01:01:58.444451 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.642) 0:00:01.500 **********
2026-03-08 01:01:58.444464 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:01:58.444478 | orchestrator |
2026-03-08 01:01:58.444490 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-08 01:01:58.444502 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.593) 0:00:02.093 **********
2026-03-08 01:01:58.444514 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-08 01:01:58.444527 | orchestrator |
2026-03-08 01:01:58.444539 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-08 01:01:58.444553 | orchestrator | Sunday 08 March 2026 00:59:48 +0000 (0:00:04.100) 0:00:06.193 **********
2026-03-08 01:01:58.444566 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-08 01:01:58.444578 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-08 01:01:58.444591 | orchestrator |
2026-03-08 01:01:58.444604 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-08 01:01:58.444616 | orchestrator | Sunday 08 March 2026 00:59:56 +0000 (0:00:07.676) 0:00:13.871 **********
2026-03-08 01:01:58.444630 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:01:58.444758 | orchestrator |
2026-03-08 01:01:58.444773 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-08 01:01:58.444786 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:03.647) 0:00:17.518 **********
2026-03-08 01:01:58.444811 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-08 01:01:58.444826 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:01:58.444839 | orchestrator |
2026-03-08 01:01:58.444853 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-08 01:01:58.444866 | orchestrator | Sunday 08 March 2026 01:00:04 +0000 (0:00:04.568) 0:00:22.086 **********
2026-03-08 01:01:58.444904 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:01:58.444918 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-08 01:01:58.444932 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-08 01:01:58.444946 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-08 01:01:58.444959 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-08 01:01:58.444972 | orchestrator |
2026-03-08 01:01:58.444986 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-08 01:01:58.444999 | orchestrator | Sunday 08 March 2026 01:00:23 +0000 (0:00:18.561) 0:00:40.647 **********
2026-03-08 01:01:58.445012 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-08 01:01:58.445025 | orchestrator |
2026-03-08 01:01:58.445037 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-08 01:01:58.445051 | orchestrator | Sunday 08 March 2026 01:00:27 +0000 (0:00:04.160) 0:00:44.808 **********
2026-03-08 01:01:58.445106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 01:01:58.445144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 01:01:58.445160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-08 01:01:58.445174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True,
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445279 | orchestrator | 2026-03-08 01:01:58.445292 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-08 01:01:58.445304 | orchestrator | Sunday 08 March 2026 01:00:29 +0000 (0:00:02.372) 0:00:47.180 ********** 2026-03-08 01:01:58.445317 | orchestrator | 
changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-08 01:01:58.445331 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-08 01:01:58.445353 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-08 01:01:58.445366 | orchestrator | 2026-03-08 01:01:58.445380 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-08 01:01:58.445393 | orchestrator | Sunday 08 March 2026 01:00:30 +0000 (0:00:00.964) 0:00:48.145 ********** 2026-03-08 01:01:58.445407 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.445421 | orchestrator | 2026-03-08 01:01:58.445435 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-08 01:01:58.445450 | orchestrator | Sunday 08 March 2026 01:00:30 +0000 (0:00:00.122) 0:00:48.267 ********** 2026-03-08 01:01:58.445464 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.445475 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.445489 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:58.445503 | orchestrator | 2026-03-08 01:01:58.445517 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-08 01:01:58.445531 | orchestrator | Sunday 08 March 2026 01:00:31 +0000 (0:00:00.448) 0:00:48.715 ********** 2026-03-08 01:01:58.445546 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:01:58.445561 | orchestrator | 2026-03-08 01:01:58.445576 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-08 01:01:58.445590 | orchestrator | Sunday 08 March 2026 01:00:31 +0000 (0:00:00.517) 0:00:49.233 ********** 2026-03-08 01:01:58.445604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.445632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.445648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.445670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.445777 | orchestrator | 2026-03-08 01:01:58.445789 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-08 01:01:58.445801 | orchestrator | Sunday 08 March 2026 01:00:35 +0000 (0:00:03.827) 0:00:53.061 ********** 2026-03-08 01:01:58.445814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.445826 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445851 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.445877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.445890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445931 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.445945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.445959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.445986 | orchestrator | skipping: 
[testbed-node-2] 2026-03-08 01:01:58.446000 | orchestrator | 2026-03-08 01:01:58.446068 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-08 01:01:58.446106 | orchestrator | Sunday 08 March 2026 01:00:36 +0000 (0:00:00.654) 0:00:53.716 ********** 2026-03-08 01:01:58.446139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.446155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446163 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446171 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.446179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.446188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446208 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.446221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.446235 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.446251 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:58.446259 | orchestrator | 2026-03-08 01:01:58.446265 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-08 01:01:58.446272 | orchestrator | Sunday 08 March 2026 01:00:38 +0000 (0:00:01.779) 0:00:55.495 ********** 2026-03-08 01:01:58.446279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.446850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.446966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.446995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447032 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447183 | orchestrator | 2026-03-08 01:01:58.447205 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-08 01:01:58.447227 | orchestrator | Sunday 08 March 2026 01:00:42 +0000 (0:00:04.523) 0:01:00.019 ********** 2026-03-08 01:01:58.447247 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:58.447260 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.447271 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:58.447282 | orchestrator | 2026-03-08 01:01:58.447293 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-08 01:01:58.447304 | orchestrator | Sunday 08 March 2026 01:00:45 +0000 (0:00:02.402) 0:01:02.421 ********** 2026-03-08 01:01:58.447315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 01:01:58.447326 | orchestrator | 2026-03-08 01:01:58.447337 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-08 01:01:58.447348 | orchestrator | Sunday 08 March 2026 01:00:46 +0000 (0:00:01.791) 0:01:04.213 ********** 2026-03-08 01:01:58.447361 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.447374 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.447388 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:58.447400 | orchestrator | 2026-03-08 01:01:58.447414 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-08 
01:01:58.447427 | orchestrator | Sunday 08 March 2026 01:00:47 +0000 (0:00:00.548) 0:01:04.761 ********** 2026-03-08 01:01:58.447441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447625 | orchestrator | 2026-03-08 01:01:58.447637 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-08 01:01:58.447648 | orchestrator | Sunday 08 March 2026 01:00:57 +0000 (0:00:09.657) 0:01:14.419 ********** 2026-03-08 01:01:58.447667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.447680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447703 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.447715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.447733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447769 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.447780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-08 01:01:58.447792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:01:58.447823 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:58.447835 | orchestrator | 2026-03-08 01:01:58.447846 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-08 01:01:58.447857 | orchestrator | Sunday 08 March 2026 01:00:58 +0000 (0:00:01.287) 0:01:15.707 ********** 2026-03-08 01:01:58.447869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-08 01:01:58.447918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.447994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:01:58.448006 | orchestrator | 2026-03-08 01:01:58.448017 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-08 01:01:58.448029 | orchestrator | Sunday 08 March 2026 01:01:02 +0000 (0:00:04.377) 0:01:20.084 ********** 2026-03-08 01:01:58.448040 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:01:58.448051 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:01:58.448062 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:01:58.448073 | orchestrator | 2026-03-08 01:01:58.448195 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-08 
01:01:58.448208 | orchestrator | Sunday 08 March 2026 01:01:03 +0000 (0:00:00.698) 0:01:20.782 ********** 2026-03-08 01:01:58.448220 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448231 | orchestrator | 2026-03-08 01:01:58.448242 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-08 01:01:58.448254 | orchestrator | Sunday 08 March 2026 01:01:05 +0000 (0:00:02.480) 0:01:23.266 ********** 2026-03-08 01:01:58.448265 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448276 | orchestrator | 2026-03-08 01:01:58.448301 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-08 01:01:58.448313 | orchestrator | Sunday 08 March 2026 01:01:08 +0000 (0:00:02.834) 0:01:26.101 ********** 2026-03-08 01:01:58.448324 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448336 | orchestrator | 2026-03-08 01:01:58.448347 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:58.448358 | orchestrator | Sunday 08 March 2026 01:01:21 +0000 (0:00:12.746) 0:01:38.847 ********** 2026-03-08 01:01:58.448369 | orchestrator | 2026-03-08 01:01:58.448380 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:58.448391 | orchestrator | Sunday 08 March 2026 01:01:21 +0000 (0:00:00.182) 0:01:39.030 ********** 2026-03-08 01:01:58.448402 | orchestrator | 2026-03-08 01:01:58.448413 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-08 01:01:58.448424 | orchestrator | Sunday 08 March 2026 01:01:21 +0000 (0:00:00.142) 0:01:39.173 ********** 2026-03-08 01:01:58.448435 | orchestrator | 2026-03-08 01:01:58.448446 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-08 01:01:58.448456 | orchestrator | Sunday 08 March 2026 01:01:22 +0000 
(0:00:00.173) 0:01:39.346 ********** 2026-03-08 01:01:58.448467 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:58.448478 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448489 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:58.448500 | orchestrator | 2026-03-08 01:01:58.448511 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-08 01:01:58.448522 | orchestrator | Sunday 08 March 2026 01:01:33 +0000 (0:00:11.628) 0:01:50.974 ********** 2026-03-08 01:01:58.448534 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448544 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:58.448556 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:58.448567 | orchestrator | 2026-03-08 01:01:58.448577 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-08 01:01:58.448588 | orchestrator | Sunday 08 March 2026 01:01:44 +0000 (0:00:10.469) 0:02:01.444 ********** 2026-03-08 01:01:58.448599 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:01:58.448610 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:01:58.448621 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:01:58.448631 | orchestrator | 2026-03-08 01:01:58.448642 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:01:58.448654 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:01:58.448666 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:01:58.448677 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:01:58.448688 | orchestrator | 2026-03-08 01:01:58.448699 | orchestrator | 2026-03-08 01:01:58.448710 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-08 01:01:58.448720 | orchestrator | Sunday 08 March 2026 01:01:55 +0000 (0:00:11.320) 0:02:12.765 ********** 2026-03-08 01:01:58.448735 | orchestrator | =============================================================================== 2026-03-08 01:01:58.448745 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.56s 2026-03-08 01:01:58.448762 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.75s 2026-03-08 01:01:58.448772 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.63s 2026-03-08 01:01:58.448782 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.32s 2026-03-08 01:01:58.448792 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.47s 2026-03-08 01:01:58.448802 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.66s 2026-03-08 01:01:58.448817 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.68s 2026-03-08 01:01:58.448827 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.57s 2026-03-08 01:01:58.448837 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.52s 2026-03-08 01:01:58.448846 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.38s 2026-03-08 01:01:58.448856 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.16s 2026-03-08 01:01:58.448866 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.10s 2026-03-08 01:01:58.448875 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.83s 2026-03-08 01:01:58.448885 | orchestrator | service-ks-register : barbican | 
Creating projects ---------------------- 3.65s 2026-03-08 01:01:58.448895 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.83s 2026-03-08 01:01:58.448904 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.48s 2026-03-08 01:01:58.448914 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s 2026-03-08 01:01:58.448924 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.37s 2026-03-08 01:01:58.448934 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.79s 2026-03-08 01:01:58.448944 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.78s 2026-03-08 01:01:58.448954 | orchestrator | 2026-03-08 01:01:58 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:01:58.448964 | orchestrator | 2026-03-08 01:01:58 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:01:58.448973 | orchestrator | 2026-03-08 01:01:58 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:01:58.448983 | orchestrator | 2026-03-08 01:01:58 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED 2026-03-08 01:01:58.448993 | orchestrator | 2026-03-08 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:02:01.481106 | orchestrator | 2026-03-08 01:02:01 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:02:01.482221 | orchestrator | 2026-03-08 01:02:01 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:02:01.483050 | orchestrator | 2026-03-08 01:02:01 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED 2026-03-08 01:02:01.484813 | orchestrator | 2026-03-08 01:02:01 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED 
2026-03-08 01:02:41.126069 | orchestrator | 2026-03-08 01:02:41 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:41.129284 | orchestrator | 2026-03-08 01:02:41 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:41.130430 | orchestrator | 2026-03-08 01:02:41 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:02:41.132464 | orchestrator | 2026-03-08 01:02:41 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:41.132502 | orchestrator | 2026-03-08 01:02:41 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:44.177205 | orchestrator | 2026-03-08 01:02:44 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:44.179351 | orchestrator | 2026-03-08 01:02:44 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:44.182148 | orchestrator | 2026-03-08 01:02:44 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state STARTED
2026-03-08 01:02:44.183908 | orchestrator | 2026-03-08 01:02:44 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:44.183959 | orchestrator | 2026-03-08 01:02:44 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:47.238426 | orchestrator | 2026-03-08 01:02:47 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:02:47.240465 | orchestrator | 2026-03-08 01:02:47 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:47.243321 | orchestrator | 2026-03-08 01:02:47 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:47.246812 | orchestrator | 2026-03-08 01:02:47 | INFO  | Task 481c1a3a-0de1-45bb-a274-19d73dea546a is in state SUCCESS
2026-03-08 01:02:47.246960 | orchestrator |
2026-03-08 01:02:47.249347 | orchestrator |
2026-03-08 01:02:47.249398 | orchestrator | PLAY [Group
hosts based on configuration] **************************************
2026-03-08 01:02:47.249404 | orchestrator |
2026-03-08 01:02:47.249410 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:02:47.249416 | orchestrator | Sunday 08 March 2026 00:59:42 +0000 (0:00:00.513) 0:00:00.513 **********
2026-03-08 01:02:47.249421 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:02:47.249427 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:02:47.249431 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:02:47.249436 | orchestrator |
2026-03-08 01:02:47.249441 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:02:47.249446 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:00.494) 0:00:01.007 **********
2026-03-08 01:02:47.249451 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-08 01:02:47.249456 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-08 01:02:47.249461 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-08 01:02:47.249466 | orchestrator |
2026-03-08 01:02:47.249471 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-08 01:02:47.249475 | orchestrator |
2026-03-08 01:02:47.249480 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-08 01:02:47.249539 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:00.531) 0:00:01.538 **********
2026-03-08 01:02:47.249545 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:02:47.249550 | orchestrator |
2026-03-08 01:02:47.249554 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-08 01:02:47.249558 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.679) 0:00:02.217 **********
2026-03-08 01:02:47.249562 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-08 01:02:47.249566 | orchestrator |
2026-03-08 01:02:47.249570 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-08 01:02:47.249573 | orchestrator | Sunday 08 March 2026 00:59:48 +0000 (0:00:03.978) 0:00:06.196 **********
2026-03-08 01:02:47.249577 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-08 01:02:47.249582 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-08 01:02:47.249585 | orchestrator |
2026-03-08 01:02:47.249589 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-08 01:02:47.249610 | orchestrator | Sunday 08 March 2026 00:59:56 +0000 (0:00:07.765) 0:00:13.962 **********
2026-03-08 01:02:47.249614 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-08 01:02:47.249618 | orchestrator |
2026-03-08 01:02:47.249622 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-08 01:02:47.249625 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:03.682) 0:00:17.644 **********
2026-03-08 01:02:47.249629 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-08 01:02:47.249633 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:02:47.249637 | orchestrator |
2026-03-08 01:02:47.249640 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-08 01:02:47.249644 | orchestrator | Sunday 08 March 2026 01:00:04 +0000 (0:00:04.546) 0:00:22.191 **********
2026-03-08 01:02:47.249648 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:02:47.249652 | orchestrator |
2026-03-08 01:02:47.249665 |
orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-08 01:02:47.249670 | orchestrator | Sunday 08 March 2026 01:00:08 +0000 (0:00:03.709) 0:00:25.900 ********** 2026-03-08 01:02:47.249673 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-08 01:02:47.249677 | orchestrator | 2026-03-08 01:02:47.249687 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-08 01:02:47.249691 | orchestrator | Sunday 08 March 2026 01:00:12 +0000 (0:00:04.170) 0:00:30.070 ********** 2026-03-08 01:02:47.249698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.249717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.249725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.249734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
2026-03-08 01:02:47.249740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2026-03-08 01:02:47.249758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.249821 | orchestrator | 2026-03-08 01:02:47.249825 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-08 01:02:47.249829 | orchestrator | Sunday 08 March 2026 01:00:15 +0000 (0:00:03.374) 0:00:33.445 ********** 2026-03-08 01:02:47.249833 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:47.249837 | orchestrator | 2026-03-08 01:02:47.249841 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-08 01:02:47.249845 | orchestrator | Sunday 08 March 2026 01:00:15 +0000 (0:00:00.124) 0:00:33.569 ********** 2026-03-08 01:02:47.249924 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:47.249929 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:47.249933 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:47.249936 | orchestrator | 2026-03-08 01:02:47.249940 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-08 01:02:47.249944 | orchestrator | Sunday 08 March 2026 01:00:16 +0000 (0:00:00.310) 0:00:33.880 
********** 2026-03-08 01:02:47.249948 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:02:47.249952 | orchestrator | 2026-03-08 01:02:47.249956 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-08 01:02:47.249960 | orchestrator | Sunday 08 March 2026 01:00:17 +0000 (0:00:00.721) 0:00:34.601 ********** 2026-03-08 01:02:47.249964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.249972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250180 | orchestrator | 2026-03-08 01:02:47.250184 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-08 01:02:47.250188 | orchestrator | Sunday 08 March 2026 01:00:23 +0000 (0:00:06.372) 0:00:40.974 ********** 2026-03-08 01:02:47.250192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250196 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250225 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:02:47.250229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250487 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:47.250493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250550 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:47.250554 | orchestrator | 2026-03-08 01:02:47.250559 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-08 01:02:47.250563 | orchestrator | Sunday 08 March 2026 01:00:24 +0000 (0:00:00.700) 0:00:41.675 ********** 2026-03-08 01:02:47.250568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250605 | orchestrator | skipping: 
[testbed-node-0] 2026-03-08 01:02:47.250610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250646 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:02:47.250650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.250658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-08 01:02:47.250662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.250685 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:02:47.250689 | orchestrator | 2026-03-08 01:02:47.250694 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-08 01:02:47.250698 | orchestrator | Sunday 08 March 2026 01:00:25 +0000 (0:00:01.063) 0:00:42.739 ********** 2026-03-08 01:02:47.250702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250775 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250854 | orchestrator | 2026-03-08 01:02:47.250862 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-08 01:02:47.250869 | orchestrator | Sunday 08 March 2026 01:00:32 +0000 (0:00:07.112) 0:00:49.851 ********** 2026-03-08 01:02:47.250877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-08 01:02:47.250904 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-08 01:02:47.250925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250942 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.250970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251043 | orchestrator | 2026-03-08 01:02:47.251046 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-08 01:02:47.251050 | orchestrator | Sunday 08 March 2026 01:00:54 +0000 (0:00:22.190) 0:01:12.041 ********** 2026-03-08 01:02:47.251058 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:47.251063 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:47.251071 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-08 01:02:47.251075 | orchestrator | 2026-03-08 01:02:47.251079 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-08 01:02:47.251084 | orchestrator | Sunday 08 March 2026 01:01:01 +0000 (0:00:07.283) 0:01:19.325 ********** 2026-03-08 01:02:47.251088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:47.251092 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:47.251095 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-08 01:02:47.251099 | orchestrator | 2026-03-08 01:02:47.251104 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-08 01:02:47.251107 | orchestrator | Sunday 08 March 2026 01:01:05 +0000 (0:00:03.724) 0:01:23.050 ********** 2026-03-08 01:02:47.251111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.251116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.251126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-08 01:02:47.251133 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.251147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-08 01:02:47.251151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251228 | orchestrator |
2026-03-08 01:02:47.251232 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-08 01:02:47.251236 | orchestrator | Sunday 08 March 2026 01:01:08 +0000 (0:00:03.492) 0:01:26.542 **********
2026-03-08 01:02:47.251242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251472 | orchestrator |
2026-03-08 01:02:47.251477 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-08 01:02:47.251481 | orchestrator | Sunday 08 March 2026 01:01:12 +0000 (0:00:03.380) 0:01:29.923 **********
2026-03-08 01:02:47.251485 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:02:47.251497 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:02:47.251501 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:02:47.251505 | orchestrator |
2026-03-08 01:02:47.251510 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-08 01:02:47.251514 | orchestrator | Sunday 08 March 2026 01:01:13 +0000 (0:00:00.920) 0:01:30.843 **********
2026-03-08 01:02:47.251518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251570 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:02:47.251575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251599 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:02:47.251606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251639 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:02:47.251643 | orchestrator |
2026-03-08 01:02:47.251648 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-08 01:02:47.251652 | orchestrator | Sunday 08 March 2026 01:01:14 +0000 (0:00:00.951) 0:01:31.795 **********
2026-03-08 01:02:47.251659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-08 01:02:47.251677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-08 01:02:47.251696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2',
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:02:47.251764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:02:47.251768 | orchestrator |
2026-03-08 01:02:47.251772 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-08 01:02:47.251777 | orchestrator | Sunday 08 March 2026 01:01:19 +0000 (0:00:05.542) 0:01:37.338 **********
2026-03-08 01:02:47.251781 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:02:47.251785 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:02:47.251789 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:02:47.251793 | orchestrator |
2026-03-08 01:02:47.251797 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-08 01:02:47.251802 | orchestrator | Sunday 08 March 2026 01:01:20 +0000 (0:00:00.295) 0:01:37.634 **********
2026-03-08 01:02:47.251806 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-08 01:02:47.251810 | orchestrator |
2026-03-08 01:02:47.251814 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-08 01:02:47.251819 | orchestrator | Sunday 08 March 2026 01:01:22 +0000 (0:00:02.502) 0:01:40.136 **********
2026-03-08 01:02:47.251823 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 01:02:47.251827 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-08 01:02:47.251831 | orchestrator |
2026-03-08 01:02:47.251839 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-08 01:02:47.251843 | orchestrator | Sunday 08 March 2026 01:01:25 +0000 (0:00:02.598) 0:01:42.735 **********
2026-03-08 01:02:47.251847 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.251851 | orchestrator |
2026-03-08 01:02:47.251855 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-08 01:02:47.251860 | orchestrator | Sunday 08 March 2026 01:01:41 +0000 (0:00:16.472) 0:01:59.208 **********
2026-03-08 01:02:47.251864 | orchestrator |
2026-03-08 01:02:47.251868 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-08 01:02:47.251872 | orchestrator | Sunday 08 March 2026 01:01:41 +0000 (0:00:00.127) 0:01:59.336 **********
2026-03-08 01:02:47.251876 | orchestrator |
2026-03-08 01:02:47.251880 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-08 01:02:47.251888 | orchestrator | Sunday 08 March 2026 01:01:41 +0000 (0:00:00.138) 0:01:59.474 **********
2026-03-08 01:02:47.251892 | orchestrator |
2026-03-08 01:02:47.251897 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-08 01:02:47.251901 | orchestrator | Sunday 08 March 2026 01:01:42 +0000 (0:00:00.124) 0:01:59.599 **********
2026-03-08 01:02:47.251905 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.251909 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.251913 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.251917 | orchestrator |
2026-03-08 01:02:47.251921 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-08 01:02:47.251925 | orchestrator | Sunday 08 March 2026 01:01:55 +0000 (0:00:13.238) 0:02:12.837 **********
2026-03-08 01:02:47.251930 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.251934 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.251938 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.251943 | orchestrator |
2026-03-08 01:02:47.251947 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-08 01:02:47.251951 | orchestrator | Sunday 08 March 2026 01:02:01 +0000 (0:00:06.636) 0:02:19.474 **********
2026-03-08 01:02:47.251955 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.251959 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.251964 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.251970 | orchestrator |
2026-03-08 01:02:47.252090 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-08 01:02:47.252099 | orchestrator | Sunday 08 March 2026 01:02:10 +0000 (0:00:08.542) 0:02:28.016 **********
2026-03-08 01:02:47.252107 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.252115 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.252120 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.252125 | orchestrator |
2026-03-08 01:02:47.252129 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-08 01:02:47.252134 | orchestrator | Sunday 08 March 2026 01:02:16 +0000 (0:00:05.573) 0:02:33.590 **********
2026-03-08 01:02:47.252139 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.252143 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.252148 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.252152 | orchestrator |
2026-03-08 01:02:47.252156 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-08 01:02:47.252161 | orchestrator | Sunday 08 March 2026 01:02:26 +0000 (0:00:10.551) 0:02:44.141 **********
2026-03-08 01:02:47.252166 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:02:47.252170 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.252175 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:02:47.252179 | orchestrator |
2026-03-08 01:02:47.252184 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-08 01:02:47.252189 | orchestrator | Sunday 08 March 2026 01:02:36 +0000 (0:00:10.381) 0:02:54.522 **********
2026-03-08 01:02:47.252193 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:02:47.252197 | orchestrator |
2026-03-08 01:02:47.252202 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:02:47.252207 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-08 01:02:47.252213 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 01:02:47.252217 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-08 01:02:47.252222 | orchestrator |
2026-03-08 01:02:47.252227 | orchestrator |
2026-03-08 01:02:47.252237 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:02:47.252248 | orchestrator | Sunday 08 March 2026 01:02:45 +0000 (0:00:08.393) 0:03:02.916 **********
2026-03-08 01:02:47.252252 | orchestrator | ===============================================================================
2026-03-08 01:02:47.252257 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.19s
2026-03-08 01:02:47.252261 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.47s
2026-03-08 01:02:47.252267 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.24s
2026-03-08 01:02:47.252271 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.55s
2026-03-08 01:02:47.252276 | orchestrator | designate : Restart designate-worker container ------------------------- 10.38s
2026-03-08 01:02:47.252280 | orchestrator | designate : Restart designate-central container ------------------------- 8.54s
2026-03-08 01:02:47.252285 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.39s
2026-03-08 01:02:47.252290 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.77s
2026-03-08 01:02:47.252294 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.28s
2026-03-08 01:02:47.252303 | orchestrator | designate : Copying over config.json files for services ----------------- 7.11s
2026-03-08 01:02:47.252307 | orchestrator | designate : Restart designate-api container ----------------------------- 6.64s
2026-03-08 01:02:47.252311 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.37s
2026-03-08 01:02:47.252315 | orchestrator | designate : Restart designate-producer container ------------------------ 5.57s
2026-03-08 01:02:47.252318 | orchestrator | designate : Check designate containers ---------------------------------- 5.54s
2026-03-08 01:02:47.252322 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.55s
2026-03-08 01:02:47.252326 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.17s
2026-03-08 01:02:47.252330 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.98s
2026-03-08 01:02:47.252333 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.72s
2026-03-08 01:02:47.252337 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.71s
2026-03-08 01:02:47.252341 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.68s
2026-03-08 01:02:47.252346 | orchestrator | 2026-03-08 01:02:47 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:47.252350 | orchestrator | 2026-03-08 01:02:47 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:50.307414 | orchestrator | 2026-03-08 01:02:50 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08
01:02:50.310171 | orchestrator | 2026-03-08 01:02:50 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:50.313214 | orchestrator | 2026-03-08 01:02:50 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:50.315783 | orchestrator | 2026-03-08 01:02:50 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:50.316857 | orchestrator | 2026-03-08 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:53.365227 | orchestrator | 2026-03-08 01:02:53 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:02:53.365911 | orchestrator | 2026-03-08 01:02:53 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:53.367318 | orchestrator | 2026-03-08 01:02:53 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:53.370765 | orchestrator | 2026-03-08 01:02:53 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:53.370815 | orchestrator | 2026-03-08 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:56.411918 | orchestrator | 2026-03-08 01:02:56 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:02:56.413512 | orchestrator | 2026-03-08 01:02:56 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:56.413987 | orchestrator | 2026-03-08 01:02:56 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:56.414717 | orchestrator | 2026-03-08 01:02:56 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:56.414763 | orchestrator | 2026-03-08 01:02:56 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:02:59.452267 | orchestrator | 2026-03-08 01:02:59 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:02:59.454622 | orchestrator | 2026-03-08 01:02:59 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:02:59.456414 | orchestrator | 2026-03-08 01:02:59 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:02:59.459482 | orchestrator | 2026-03-08 01:02:59 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:02:59.459747 | orchestrator | 2026-03-08 01:02:59 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:02.510174 | orchestrator | 2026-03-08 01:03:02 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:02.511767 | orchestrator | 2026-03-08 01:03:02 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:03:02.513423 | orchestrator | 2026-03-08 01:03:02 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:03:02.515390 | orchestrator | 2026-03-08 01:03:02 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:03:02.515583 | orchestrator | 2026-03-08 01:03:02 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:05.558263 | orchestrator | 2026-03-08 01:03:05 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:05.560643 | orchestrator | 2026-03-08 01:03:05 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:03:05.562194 | orchestrator | 2026-03-08 01:03:05 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:03:05.563866 | orchestrator | 2026-03-08 01:03:05 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:03:05.564023 | orchestrator | 2026-03-08 01:03:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:08.611649 | orchestrator | 2026-03-08 01:03:08 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:08.613743 | orchestrator | 2026-03-08 01:03:08 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:03:08.615995 | orchestrator | 2026-03-08 01:03:08 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:03:08.617500 | orchestrator | 2026-03-08 01:03:08 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:03:08.617528 | orchestrator | 2026-03-08 01:03:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:11.654712 | orchestrator | 2026-03-08 01:03:11 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:11.655569 | orchestrator | 2026-03-08 01:03:11 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:03:11.656657 | orchestrator | 2026-03-08 01:03:11 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:03:11.657717 | orchestrator | 2026-03-08 01:03:11 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state STARTED
2026-03-08 01:03:11.657856 | orchestrator | 2026-03-08 01:03:11 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:03:14.689968 | orchestrator | 2026-03-08 01:03:14 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:14.692677 | orchestrator | 2026-03-08 01:03:14 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED
2026-03-08 01:03:14.693498 | orchestrator | 2026-03-08 01:03:14 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED
2026-03-08 01:03:14.694989 | orchestrator |
2026-03-08 01:03:14.695048 | orchestrator | 2026-03-08 01:03:14 | INFO  | Task 2b771276-80e5-4c8d-8712-8a8fc1734b79 is in state SUCCESS
2026-03-08 01:03:14.696314 | orchestrator |
2026-03-08 01:03:14.696336 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:03:14.696340 | orchestrator |
2026-03-08 01:03:14.696344 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:03:14.696349 | orchestrator | Sunday 08 March 2026 01:02:01 +0000 (0:00:00.233) 0:00:00.233 **********
2026-03-08 01:03:14.696353 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:14.696357 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:14.696361 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:14.696365 | orchestrator |
2026-03-08 01:03:14.696369 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:03:14.696373 | orchestrator | Sunday 08 March 2026 01:02:02 +0000 (0:00:00.285) 0:00:00.518 **********
2026-03-08 01:03:14.696377 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-08 01:03:14.696381 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-08 01:03:14.696385 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-08 01:03:14.696389 | orchestrator |
2026-03-08 01:03:14.696392 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-08 01:03:14.696396 | orchestrator |
2026-03-08 01:03:14.696400 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:14.696407 | orchestrator | Sunday 08 March 2026 01:02:02 +0000 (0:00:00.732) 0:00:01.251 **********
2026-03-08 01:03:14.696413 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:03:14.696419 | orchestrator |
2026-03-08 01:03:14.696429 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-08 01:03:14.696436 | orchestrator | Sunday 08 March 2026 01:02:03 +0000 (0:00:00.564) 0:00:01.815 **********
2026-03-08 01:03:14.696442 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-08 01:03:14.696448 | orchestrator |
2026-03-08 01:03:14.696454 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-08 01:03:14.696460 | orchestrator | Sunday 08 March 2026 01:02:07 +0000 (0:00:03.883) 0:00:05.699 **********
2026-03-08 01:03:14.696466 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-08 01:03:14.696471 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-08 01:03:14.696477 | orchestrator |
2026-03-08 01:03:14.696483 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-08 01:03:14.696489 | orchestrator | Sunday 08 March 2026 01:02:14 +0000 (0:00:07.559) 0:00:13.258 **********
2026-03-08 01:03:14.696496 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:03:14.696502 | orchestrator |
2026-03-08 01:03:14.696509 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-08 01:03:14.696526 | orchestrator | Sunday 08 March 2026 01:02:18 +0000 (0:00:03.536) 0:00:16.795 **********
2026-03-08 01:03:14.696533 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-08 01:03:14.696539 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:03:14.696559 | orchestrator |
2026-03-08 01:03:14.696563 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-08 01:03:14.696567 | orchestrator | Sunday 08 March 2026 01:02:22 +0000 (0:00:04.543) 0:00:21.338 **********
2026-03-08 01:03:14.696571 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:03:14.696574 | orchestrator |
2026-03-08 01:03:14.696578 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-08 01:03:14.696582 | orchestrator | Sunday 08 March 2026 01:02:27 +0000 (0:00:04.190) 0:00:25.529
**********
2026-03-08 01:03:14.696587 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-08 01:03:14.696593 | orchestrator |
2026-03-08 01:03:14.696602 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:14.696608 | orchestrator | Sunday 08 March 2026 01:02:30 +0000 (0:00:03.495) 0:00:29.025 **********
2026-03-08 01:03:14.696614 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:14.696620 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:14.696627 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:14.696633 | orchestrator |
2026-03-08 01:03:14.696640 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-08 01:03:14.696646 | orchestrator | Sunday 08 March 2026 01:02:30 +0000 (0:00:00.308) 0:00:29.334 **********
2026-03-08 01:03:14.696655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696683 | orchestrator |
2026-03-08 01:03:14.696687 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-08 01:03:14.696691 | orchestrator | Sunday 08 March 2026 01:02:31 +0000 (0:00:00.752) 0:00:30.086 **********
2026-03-08 01:03:14.696695 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:14.696699 | orchestrator |
2026-03-08 01:03:14.696702 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-08 01:03:14.696706 | orchestrator | Sunday 08 March 2026 01:02:31 +0000 (0:00:00.132) 0:00:30.218 **********
2026-03-08 01:03:14.696713 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:14.696717 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:14.696720 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:14.696724 | orchestrator |
2026-03-08 01:03:14.696728 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-08 01:03:14.696732 | orchestrator | Sunday 08 March 2026 01:02:32 +0000 (0:00:00.416) 0:00:30.635 **********
2026-03-08 01:03:14.696735 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:03:14.696739 | orchestrator |
2026-03-08 01:03:14.696743 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-08 01:03:14.696747 | orchestrator | Sunday 08 March 2026 01:02:32 +0000 (0:00:00.475) 0:00:31.110 **********
2026-03-08 01:03:14.696751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696769 | orchestrator |
2026-03-08 01:03:14.696773 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-08 01:03:14.696777 | orchestrator | Sunday 08 March 2026 01:02:34 +0000 (0:00:01.436) 0:00:32.546 **********
2026-03-08 01:03:14.696783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-08 01:03:14.696787 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:14.696791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'],
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.696795 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:14.696802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.696809 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:14.696815 | orchestrator | 2026-03-08 01:03:14.696821 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-08 01:03:14.696826 | orchestrator | Sunday 08 March 2026 01:02:34 +0000 (0:00:00.716) 0:00:33.262 ********** 2026-03-08 01:03:14.696832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.696842 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:14.696851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.696856 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:14.696862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.696867 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:14.696873 | orchestrator | 2026-03-08 01:03:14.696879 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-08 01:03:14.696884 | orchestrator | Sunday 08 March 2026 01:02:35 +0000 (0:00:00.735) 0:00:33.997 ********** 2026-03-08 01:03:14.696893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.696899 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.696909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.696967 | orchestrator | 2026-03-08 01:03:14.696976 | orchestrator | TASK [placement : Copying over placement.conf] 
********************************* 2026-03-08 01:03:14.696981 | orchestrator | Sunday 08 March 2026 01:02:36 +0000 (0:00:01.430) 0:00:35.428 ********** 2026-03-08 01:03:14.696991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.696998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.697010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.697024 | orchestrator | 2026-03-08 01:03:14.697030 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-08 01:03:14.697036 | orchestrator | Sunday 08 March 2026 01:02:39 +0000 (0:00:02.681) 0:00:38.109 ********** 2026-03-08 01:03:14.697042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-08 01:03:14.697048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-08 01:03:14.697054 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-08 01:03:14.697060 | orchestrator | 2026-03-08 01:03:14.697066 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-08 01:03:14.697072 | orchestrator | Sunday 08 March 2026 01:02:41 +0000 (0:00:01.551) 0:00:39.660 ********** 2026-03-08 01:03:14.697078 | orchestrator | 
changed: [testbed-node-0] 2026-03-08 01:03:14.697084 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:03:14.697091 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:03:14.697097 | orchestrator | 2026-03-08 01:03:14.697103 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-08 01:03:14.697110 | orchestrator | Sunday 08 March 2026 01:02:42 +0000 (0:00:01.470) 0:00:41.130 ********** 2026-03-08 01:03:14.697119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.697126 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:14.697133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.697140 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:14.697151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-08 01:03:14.697162 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:14.697169 | orchestrator | 2026-03-08 01:03:14.697175 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-08 01:03:14.697182 | orchestrator | Sunday 08 March 2026 01:02:43 +0000 (0:00:00.524) 0:00:41.655 ********** 2026-03-08 01:03:14.697189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.697199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.697207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-08 01:03:14.697214 | orchestrator | 2026-03-08 01:03:14.697220 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-08 01:03:14.697227 | orchestrator | Sunday 08 March 2026 01:02:44 +0000 (0:00:01.254) 0:00:42.909 ********** 2026-03-08 01:03:14.697233 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:14.697246 | orchestrator | 2026-03-08 01:03:14.697252 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-08 01:03:14.697258 | orchestrator | Sunday 08 March 2026 01:02:47 +0000 (0:00:02.835) 0:00:45.744 ********** 2026-03-08 01:03:14.697263 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:14.697269 | orchestrator | 2026-03-08 01:03:14.697275 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-08 01:03:14.697281 | orchestrator | Sunday 08 March 2026 01:02:49 +0000 (0:00:02.483) 0:00:48.227 ********** 2026-03-08 01:03:14.697288 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:14.697294 | orchestrator | 2026-03-08 01:03:14.697299 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:14.697305 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:13.802) 0:01:02.030 ********** 2026-03-08 01:03:14.697311 | orchestrator | 2026-03-08 01:03:14.697318 | 
orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:14.697324 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.076) 0:01:02.107 ********** 2026-03-08 01:03:14.697330 | orchestrator | 2026-03-08 01:03:14.697341 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-08 01:03:14.697348 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.064) 0:01:02.171 ********** 2026-03-08 01:03:14.697354 | orchestrator | 2026-03-08 01:03:14.697360 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-08 01:03:14.697366 | orchestrator | Sunday 08 March 2026 01:03:03 +0000 (0:00:00.083) 0:01:02.255 ********** 2026-03-08 01:03:14.697372 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:14.697379 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:03:14.697385 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:03:14.697391 | orchestrator | 2026-03-08 01:03:14.697397 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:03:14.697404 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:03:14.697412 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:03:14.697418 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:03:14.697424 | orchestrator | 2026-03-08 01:03:14.697431 | orchestrator | 2026-03-08 01:03:14.697437 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:03:14.697443 | orchestrator | Sunday 08 March 2026 01:03:13 +0000 (0:00:10.052) 0:01:12.307 ********** 2026-03-08 01:03:14.697450 | orchestrator | 
=============================================================================== 2026-03-08 01:03:14.697456 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.80s 2026-03-08 01:03:14.697463 | orchestrator | placement : Restart placement-api container ---------------------------- 10.05s 2026-03-08 01:03:14.697469 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.56s 2026-03-08 01:03:14.697475 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.54s 2026-03-08 01:03:14.697482 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.19s 2026-03-08 01:03:14.697489 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.88s 2026-03-08 01:03:14.697495 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.54s 2026-03-08 01:03:14.697501 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.50s 2026-03-08 01:03:14.697507 | orchestrator | placement : Creating placement databases -------------------------------- 2.84s 2026-03-08 01:03:14.697513 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.68s 2026-03-08 01:03:14.697524 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.48s 2026-03-08 01:03:14.697536 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.55s 2026-03-08 01:03:14.697543 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.47s 2026-03-08 01:03:14.697549 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.44s 2026-03-08 01:03:14.697555 | orchestrator | placement : Copying over config.json files for services ----------------- 1.43s 2026-03-08 01:03:14.697562 | orchestrator | placement : 
Check placement containers ---------------------------------- 1.25s 2026-03-08 01:03:14.697568 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.75s 2026-03-08 01:03:14.697575 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.73s 2026-03-08 01:03:14.697581 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2026-03-08 01:03:14.697587 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.72s 2026-03-08 01:03:14.697594 | orchestrator | 2026-03-08 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:17.725213 | orchestrator | 2026-03-08 01:03:17 | INFO  | Task fc000a23-303c-4167-8b77-08f425647f27 is in state STARTED 2026-03-08 01:03:17.727755 | orchestrator | 2026-03-08 01:03:17 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:03:17.728890 | orchestrator | 2026-03-08 01:03:17 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:03:17.732988 | orchestrator | 2026-03-08 01:03:17 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:17.733062 | orchestrator | 2026-03-08 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:20.778079 | orchestrator | 2026-03-08 01:03:20 | INFO  | Task fc000a23-303c-4167-8b77-08f425647f27 is in state STARTED 2026-03-08 01:03:20.779712 | orchestrator | 2026-03-08 01:03:20 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:03:20.781176 | orchestrator | 2026-03-08 01:03:20 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:03:20.782596 | orchestrator | 2026-03-08 01:03:20 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:20.782629 | orchestrator | 2026-03-08 01:03:20 | INFO  | Wait 1 second(s) until the next check 
2026-03-08 01:03:23.820272 | orchestrator | 2026-03-08 01:03:23 | INFO  | Task fc000a23-303c-4167-8b77-08f425647f27 is in state SUCCESS 2026-03-08 01:03:23.821485 | orchestrator | 2026-03-08 01:03:23 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:03:23.823421 | orchestrator | 2026-03-08 01:03:23 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:03:23.824700 | orchestrator | 2026-03-08 01:03:23 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:23.826678 | orchestrator | 2026-03-08 01:03:23 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:03:23.826711 | orchestrator | 2026-03-08 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:26.882648 | orchestrator | 2026-03-08 01:03:26 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:03:26.884425 | orchestrator | 2026-03-08 01:03:26 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:03:26.886275 | orchestrator | 2026-03-08 01:03:26 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:26.887434 | orchestrator | 2026-03-08 01:03:26 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:03:26.887476 | orchestrator | 2026-03-08 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:03:29.926209 | orchestrator | 2026-03-08 01:03:29 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:03:29.926294 | orchestrator | 2026-03-08 01:03:29 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state STARTED 2026-03-08 01:03:29.930258 | orchestrator | 2026-03-08 01:03:29 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:29.930355 | orchestrator | 2026-03-08 01:03:29 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 
2026-03-08 01:03:29.930368 | orchestrator | 2026-03-08 01:03:29 | INFO  | Wait 1 second(s) until the next check
[identical status poll repeated at ~3 s intervals from 01:03:32 through 01:03:54; all four tasks remained in state STARTED]
2026-03-08 01:03:57.329929 | orchestrator | 2026-03-08 01:03:57 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED
2026-03-08 01:03:57.330636 | orchestrator | 2026-03-08 01:03:57 | INFO  | Task e2a04a5f-900a-439c-886a-67b04fff7b15 is in state SUCCESS
2026-03-08 01:03:57.332133 | orchestrator |
2026-03-08 01:03:57.332190 | orchestrator |
2026-03-08 01:03:57.332204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:03:57.332218 | orchestrator |
2026-03-08 01:03:57.332229 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:03:57.332241 | orchestrator | Sunday 08 March 2026 01:03:18 +0000 (0:00:00.185) 0:00:00.185 **********
2026-03-08 01:03:57.332252 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.332337 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.332350 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.332361 | orchestrator |
2026-03-08 01:03:57.332373 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:03:57.332385 | orchestrator | Sunday 08 March 2026 01:03:19 +0000 (0:00:00.308) 0:00:00.494 **********
2026-03-08 01:03:57.332397 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-08 01:03:57.332410 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-08 01:03:57.332421 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-08 01:03:57.332433 | orchestrator |
2026-03-08 01:03:57.332445 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-08 01:03:57.332457 | orchestrator |
2026-03-08 01:03:57.332509 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-08 01:03:57.332522 | orchestrator | Sunday 08 March 2026 01:03:19 +0000 (0:00:00.761) 0:00:01.255 **********
2026-03-08 01:03:57.332533 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.332545 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.332556 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.332568 | orchestrator |
2026-03-08 01:03:57.332579 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:03:57.332592 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:03:57.332608 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:03:57.332620 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:03:57.332631 | orchestrator |
2026-03-08 01:03:57.332642 | orchestrator |
2026-03-08 01:03:57.332811 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:03:57.332916 | orchestrator | Sunday 08 March 2026 01:03:20 +0000 (0:00:00.775) 0:00:02.030 **********
2026-03-08 01:03:57.332929 | orchestrator | ===============================================================================
2026-03-08 01:03:57.332941 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s
2026-03-08 01:03:57.332953 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2026-03-08 01:03:57.332965 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-08 01:03:57.332976 | orchestrator |
2026-03-08 01:03:57.332987 | orchestrator |
2026-03-08 01:03:57.332998 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:03:57.333009 | orchestrator |
2026-03-08 01:03:57.333021 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:03:57.333028 | orchestrator | Sunday 08 March 2026 00:59:43 +0000 (0:00:00.504) 0:00:00.504 **********
2026-03-08 01:03:57.333035 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.333042 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.333048 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.333055 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:03:57.333062 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:03:57.333068 | orchestrator |
2026-03-08 01:03:57.333075 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:03:57.333082 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:03:57.333088 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:00.959) 0:00:01.464 **********
2026-03-08 01:03:57.333095 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-08 01:03:57.333102 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-08 01:03:57.333109 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-08 01:03:57.333116 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-08 01:03:57.333122 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-08 01:03:57.333129 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-08 01:03:57.333135 | orchestrator |
2026-03-08 01:03:57.333142 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-08 01:03:57.333148 | orchestrator |
2026-03-08 01:03:57.333155 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:03:57.333162 | orchestrator | Sunday 08 March 2026 00:59:45 +0000 (0:00:00.828) 0:00:02.292 **********
2026-03-08 01:03:57.333168 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:03:57.333177 | orchestrator |
2026-03-08 01:03:57.333183 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-08 01:03:57.333190 | orchestrator | Sunday 08 March 2026 00:59:46 +0000 (0:00:00.989) 0:00:03.282 **********
2026-03-08 01:03:57.333209 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.333234 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.333241 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.333248 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:03:57.333254 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:03:57.333261 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:03:57.333267 | orchestrator |
2026-03-08 01:03:57.333274 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-08 01:03:57.333281 | orchestrator | Sunday 08 March 2026 00:59:47 +0000 (0:00:01.179) 0:00:04.461 **********
2026-03-08 01:03:57.333287 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.333294 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.333300 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.333307 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:03:57.333313 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:03:57.333336 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:03:57.333343 | orchestrator |
2026-03-08 01:03:57.333350 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-08 01:03:57.333357 | orchestrator | Sunday 08 March 2026 00:59:48 +0000 (0:00:00.711) 0:00:05.524 **********
2026-03-08 01:03:57.333364 | orchestrator | ok: [testbed-node-0] => {
2026-03-08 01:03:57.333372 | orchestrator |  "changed": false,
2026-03-08 01:03:57.333378 | orchestrator |  "msg": "All assertions passed"
2026-03-08 01:03:57.333385 | orchestrator | }
[identical "All assertions passed" result for testbed-node-1 through testbed-node-5]
2026-03-08 01:03:57.333523 | orchestrator |
2026-03-08 01:03:57.333595 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-08 01:03:57.333602 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:00.514) 0:00:06.236 **********
2026-03-08 01:03:57.333609 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.333616 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.333622 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.333629 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.333635 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.333642 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.333648 | orchestrator |
2026-03-08 01:03:57.333655 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-08 01:03:57.333662 | orchestrator | Sunday 08 March 2026 00:59:49 +0000 (0:00:00.514) 0:00:06.750 **********
2026-03-08 01:03:57.333669 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-08 01:03:57.333675 | orchestrator |
2026-03-08 01:03:57.333682 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-08 01:03:57.333689 | orchestrator | Sunday 08 March 2026 00:59:53 +0000 (0:00:03.619) 0:00:10.369 **********
2026-03-08 01:03:57.333695 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-08 01:03:57.333712 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-08 01:03:57.333724 | orchestrator |
2026-03-08 01:03:57.333735 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-08 01:03:57.333747 | orchestrator | Sunday 08 March 2026 01:00:00 +0000 (0:00:07.474) 0:00:17.844 **********
2026-03-08 01:03:57.333757 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:03:57.333780 | orchestrator |
2026-03-08 01:03:57.333792 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-08 01:03:57.333803 | orchestrator | Sunday 08 March 2026 01:00:04 +0000 (0:00:03.806) 0:00:21.651 **********
2026-03-08 01:03:57.333814 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-08 01:03:57.333847 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:03:57.333855 | orchestrator |
2026-03-08 01:03:57.333861 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-08 01:03:57.333868 | orchestrator | Sunday 08 March 2026 01:00:08 +0000 (0:00:04.048) 0:00:25.699 **********
2026-03-08 01:03:57.333875 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:03:57.333881 | orchestrator |
2026-03-08 01:03:57.333888 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-08 01:03:57.333894 | orchestrator | Sunday 08 March 2026 01:00:12 +0000 (0:00:03.658) 0:00:29.357 **********
2026-03-08 01:03:57.333901 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-08 01:03:57.333907 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-08 01:03:57.333914 | orchestrator |
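The service-ks-register tasks above register neutron with Keystone: create the service, add internal and public endpoints, ensure the service project and neutron user exist, and grant roles. The endpoint URLs in the log are simply the internal/external FQDNs combined with the service port; a small sketch of that derivation (function and parameter names are illustrative, not kolla-ansible's actual variables):

```python
def keystone_endpoints(service, port, internal_fqdn, external_fqdn):
    """Build the (service, url, interface) triples that the
    'Creating endpoints' task registers, one per interface."""
    return [
        (service, f"https://{internal_fqdn}:{port}", "internal"),
        (service, f"https://{external_fqdn}:{port}", "public"),
    ]

endpoints = keystone_endpoints(
    "neutron", 9696,
    internal_fqdn="api-int.testbed.osism.xyz",
    external_fqdn="api.testbed.osism.xyz",
)
for svc, url, interface in endpoints:
    # Mirrors the two 'changed' items in the log above.
    print(f"{svc} -> {url} -> {interface}")
```

In the actual deployment these triples are passed to Keystone's endpoint-create API; re-running the play is idempotent, which is why the project and role tasks report `ok` rather than `changed`.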
2026-03-08 01:03:57.333920 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:03:57.333927 | orchestrator | Sunday 08 March 2026 01:00:20 +0000 (0:00:08.495) 0:00:37.853 **********
2026-03-08 01:03:57.333933 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.333940 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.333946 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.333953 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.333960 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.333966 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.333973 | orchestrator |
2026-03-08 01:03:57.333979 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-08 01:03:57.333986 | orchestrator | Sunday 08 March 2026 01:00:21 +0000 (0:00:00.655) 0:00:38.508 **********
2026-03-08 01:03:57.333993 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.334005 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.334011 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.334070 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.334077 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.334084 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.334091 | orchestrator |
2026-03-08 01:03:57.334097 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-08 01:03:57.334104 | orchestrator | Sunday 08 March 2026 01:00:23 +0000 (0:00:01.949) 0:00:40.458 **********
2026-03-08 01:03:57.334111 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:03:57.334118 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:03:57.334124 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:03:57.334131 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:03:57.334137 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:03:57.334162 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:03:57.334170 | orchestrator |
2026-03-08 01:03:57.334177 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-08 01:03:57.334183 | orchestrator | Sunday 08 March 2026 01:00:25 +0000 (0:00:01.938) 0:00:42.396 **********
2026-03-08 01:03:57.334190 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.334196 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.334203 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.334218 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.334225 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.334231 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.334238 | orchestrator |
2026-03-08 01:03:57.334244 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-08 01:03:57.334251 | orchestrator | Sunday 08 March 2026 01:00:27 +0000 (0:00:02.090) 0:00:44.486 **********
2026-03-08 01:03:57.334263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
[same neutron-server item also changed on testbed-node-0 and testbed-node-2, with healthcheck addresses 192.168.16.10 and 192.168.16.12]
2026-03-08 01:03:57.334297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
[same neutron-ovn-metadata-agent item also changed on testbed-node-3 and testbed-node-5]
2026-03-08 01:03:57.334334 | orchestrator |
2026-03-08 01:03:57.334341 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-08 01:03:57.334348 | orchestrator | Sunday 08 March 2026 01:00:30 +0000 (0:00:03.038) 0:00:47.525 **********
2026-03-08 01:03:57.334355 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not a directory
2026-03-08 01:03:57.334390 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:03:57.334397 | orchestrator |
2026-03-08 01:03:57.334403 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-08 01:03:57.334410 | orchestrator | Sunday 08 March 2026 01:00:31 +0000 (0:00:00.852) 0:00:48.377 **********
2026-03-08 01:03:57.334433 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:03:57.334442 | orchestrator |
2026-03-08 01:03:57.334449 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-08 01:03:57.334456 | orchestrator | Sunday 08 March 2026 01:00:32 +0000 (0:00:01.120) 0:00:49.498 **********
[changed on all six nodes; item payloads identical to the "Ensuring config directories exist" task above: neutron-server on testbed-node-0/-1/-2, neutron-ovn-metadata-agent on testbed-node-3/-4/-5]
2026-03-08 01:03:57.334522 | orchestrator |
2026-03-08 01:03:57.334529 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-08 01:03:57.334536 | orchestrator | Sunday 08 March 2026 01:00:36 +0000 (0:00:04.093) 0:00:53.592 **********
[skipping on all six nodes; same item payloads as above]
2026-03-08 01:03:57.334693 | orchestrator |
2026-03-08 01:03:57.334700 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-08 01:03:57.334710 | orchestrator | Sunday 08 March 2026 01:00:39 +0000 (0:00:03.606) 0:00:57.198 **********
[skipping on testbed-node-0 with the same neutron-server item as above]
2026-03-08 01:03:57.334733 | orchestrator | skipping:
[testbed-node-0] 2026-03-08 01:03:57.334745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.334757 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.334811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.334853 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:03:57.334866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.334873 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:03:57.334898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.334906 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:03:57.334912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.334920 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:57.334926 | orchestrator | 2026-03-08 01:03:57.334933 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-08 01:03:57.334940 | orchestrator | Sunday 08 March 2026 01:00:44 +0000 (0:00:04.212) 0:01:01.410 ********** 2026-03-08 01:03:57.334946 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.334953 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:57.334960 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:03:57.334967 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:57.334973 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:03:57.334991 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:03:57.335011 | orchestrator | 2026-03-08 01:03:57.335018 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-08 01:03:57.335024 | orchestrator | Sunday 08 March 2026 01:00:46 +0000 (0:00:02.560) 0:01:03.971 ********** 2026-03-08 01:03:57.335031 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:57.335038 | orchestrator | 2026-03-08 01:03:57.335045 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-08 01:03:57.335066 | orchestrator | Sunday 08 March 2026 01:00:46 +0000 (0:00:00.107) 0:01:04.078 ********** 2026-03-08 01:03:57.335073 | orchestrator | skipping: [testbed-node-0] 
2026-03-08 01:03:57.335079 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.335086 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:57.335092 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:03:57.335099 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:03:57.335105 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:03:57.335112 | orchestrator | 2026-03-08 01:03:57.335118 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-08 01:03:57.335125 | orchestrator | Sunday 08 March 2026 01:00:47 +0000 (0:00:00.703) 0:01:04.781 ********** 2026-03-08 01:03:57.335132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.335139 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:57.335157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.335165 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.335172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.335179 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:03:57.335186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.335198 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:57.335205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.335212 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:03:57.335224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-08 01:03:57.335231 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:03:57.335238 | orchestrator | 2026-03-08 01:03:57.335244 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-08 01:03:57.335251 | orchestrator | Sunday 08 March 2026 01:00:50 +0000 (0:00:03.248) 0:01:08.030 ********** 2026-03-08 01:03:57.335264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335324 | orchestrator | 2026-03-08 01:03:57.335331 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-08 01:03:57.335337 | orchestrator | Sunday 08 March 2026 01:00:54 +0000 (0:00:04.106) 0:01:12.136 ********** 2026-03-08 01:03:57.335344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.335388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.335408 | orchestrator | 2026-03-08 01:03:57.335415 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-08 01:03:57.335422 | orchestrator | Sunday 08 March 2026 01:01:01 +0000 (0:00:06.936) 0:01:19.073 ********** 2026-03-08 01:03:57.335429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.335436 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:57.335443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-08 01:03:57.335450 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.335466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335474 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.335494 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.335501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335508 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335523 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335529 | orchestrator |
2026-03-08 01:03:57.335536 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-03-08 01:03:57.335543 | orchestrator | Sunday 08 March 2026 01:01:04 +0000 (0:00:02.689) 0:01:21.762 **********
2026-03-08 01:03:57.335549 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335556 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335562 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:03:57.335569 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335575 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:03:57.335582 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:03:57.335589 | orchestrator |
2026-03-08 01:03:57.335596 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-03-08 01:03:57.335603 | orchestrator | Sunday 08 March 2026 01:01:07 +0000 (0:00:03.148) 0:01:24.911 **********
2026-03-08 01:03:57.335614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335621 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335646 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.335660 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.335674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.335692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.335709 | orchestrator |
2026-03-08 01:03:57.335722 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-08 01:03:57.335734 | orchestrator | Sunday 08 March 2026 01:01:11 +0000 (0:00:03.892) 0:01:28.803 **********
2026-03-08 01:03:57.335746 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335756 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.335767 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.335779 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335790 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.335797 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335804 | orchestrator |
2026-03-08 01:03:57.335811 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-08 01:03:57.335817 | orchestrator | Sunday 08 March 2026 01:01:13 +0000 (0:00:02.378) 0:01:31.182 **********
2026-03-08 01:03:57.335878 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.335887 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.335894 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.335901 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335907 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335914 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335921 | orchestrator |
2026-03-08 01:03:57.335928 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-08 01:03:57.335935 | orchestrator | Sunday 08 March 2026 01:01:16 +0000 (0:00:02.611) 0:01:33.794 **********
2026-03-08 01:03:57.335943 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.335950 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.335957 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.335965 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.335972 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.335979 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.335987 | orchestrator |
2026-03-08 01:03:57.335994 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-08 01:03:57.336001 | orchestrator | Sunday 08 March 2026 01:01:18 +0000 (0:00:01.937) 0:01:35.731 **********
2026-03-08 01:03:57.336008 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336015 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336023 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336030 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336037 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336045 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336052 | orchestrator |
2026-03-08 01:03:57.336059 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-08 01:03:57.336067 | orchestrator | Sunday 08 March 2026 01:01:20 +0000 (0:00:01.951) 0:01:37.683 **********
2026-03-08 01:03:57.336074 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336082 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336089 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336097 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336105 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336112 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336119 | orchestrator |
2026-03-08 01:03:57.336126 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-08 01:03:57.336134 | orchestrator | Sunday 08 March 2026 01:01:23 +0000 (0:00:02.549) 0:01:40.232 **********
2026-03-08 01:03:57.336141 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336148 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336155 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336163 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336170 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336188 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336196 | orchestrator |
2026-03-08 01:03:57.336204 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-08 01:03:57.336211 | orchestrator | Sunday 08 March 2026 01:01:25 +0000 (0:00:02.510) 0:01:42.743 **********
2026-03-08 01:03:57.336218 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336226 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336233 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336241 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336248 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336255 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336262 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336269 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336277 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336284 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336291 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-08 01:03:57.336299 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336306 | orchestrator |
2026-03-08 01:03:57.336314 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-03-08 01:03:57.336327 | orchestrator | Sunday 08 March 2026 01:01:27 +0000 (0:00:02.049) 0:01:44.792 **********
2026-03-08 01:03:57.336348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336356 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336373 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336395 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336411 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336431 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336453 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336460 | orchestrator |
2026-03-08 01:03:57.336468 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-03-08 01:03:57.336475 | orchestrator | Sunday 08 March 2026 01:01:29 +0000 (0:00:02.288) 0:01:47.080 **********
2026-03-08 01:03:57.336483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336496 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336512 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.336527 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336551 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336567 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.336589 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336596 | orchestrator |
2026-03-08 01:03:57.336603 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-08 01:03:57.336611 | orchestrator | Sunday 08 March 2026 01:01:32 +0000 (0:00:02.546) 0:01:49.477 **********
2026-03-08 01:03:57.336618 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336625 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336632 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336640 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336647 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336655 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336662 | orchestrator |
2026-03-08 01:03:57.336669 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-08 01:03:57.336676 | orchestrator | Sunday 08 March 2026 01:01:34 +0000 (0:00:02.546) 0:01:52.023 **********
2026-03-08 01:03:57.336684 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336691 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336698 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336706 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:03:57.336719 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:03:57.336731 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:03:57.336743 | orchestrator |
2026-03-08 01:03:57.336754 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-08 01:03:57.336766 | orchestrator | Sunday 08 March 2026 01:01:38 +0000 (0:00:03.779) 0:01:55.803 **********
2026-03-08 01:03:57.336778 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336789 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336800 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336813 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336842 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336855 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336867 | orchestrator |
2026-03-08 01:03:57.336880 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-08 01:03:57.336891 | orchestrator | Sunday 08 March 2026 01:01:40 +0000 (0:00:02.023) 0:01:57.826 **********
2026-03-08 01:03:57.336898 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336906 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336913 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336920 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336927 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.336934 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.336942 | orchestrator |
2026-03-08 01:03:57.336949 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-08 01:03:57.336956 | orchestrator | Sunday 08 March 2026 01:01:42 +0000 (0:00:02.301) 0:02:00.128 **********
2026-03-08 01:03:57.336963 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.336971 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.336983 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.336991 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.336998 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337011 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337019 | orchestrator |
2026-03-08 01:03:57.337026 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-08 01:03:57.337041 | orchestrator | Sunday 08 March 2026 01:01:46 +0000 (0:00:03.529) 0:02:03.658 **********
2026-03-08 01:03:57.337048 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337055 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337063 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337070 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337077 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337091 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337099 | orchestrator |
2026-03-08 01:03:57.337106 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-08 01:03:57.337114 | orchestrator | Sunday 08 March 2026 01:01:48 +0000 (0:00:02.114) 0:02:05.772 **********
2026-03-08 01:03:57.337121 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337129 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337136 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337144 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337151 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337158 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337165 | orchestrator |
2026-03-08 01:03:57.337172 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-08 01:03:57.337180 | orchestrator | Sunday 08 March 2026 01:01:50 +0000 (0:00:01.738) 0:02:07.511 **********
2026-03-08 01:03:57.337187 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337194 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337201 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337208 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337215 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337222 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337230 | orchestrator |
2026-03-08 01:03:57.337237 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-08 01:03:57.337244 | orchestrator | Sunday 08 March 2026 01:01:52 +0000 (0:00:01.879) 0:02:09.391 **********
2026-03-08 01:03:57.337252 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337259 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337266 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337273 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337280 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337288 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337295 | orchestrator |
2026-03-08 01:03:57.337302 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-08 01:03:57.337309 | orchestrator | Sunday 08 March 2026 01:01:53 +0000 (0:00:01.713) 0:02:11.104 **********
2026-03-08 01:03:57.337317 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337325 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337332 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337339 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337346 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337354 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337361 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337369 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337376 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337383 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337390 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-08 01:03:57.337397 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337405 | orchestrator |
2026-03-08 01:03:57.337412 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-08 01:03:57.337420 | orchestrator | Sunday 08 March 2026 01:01:56 +0000 (0:00:02.734) 0:02:13.839 **********
2026-03-08 01:03:57.337436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.337444 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:03:57.337456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.337470 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:03:57.337477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.337485 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:03:57.337492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.337500 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:03:57.337507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.337521 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:03:57.337529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-08 01:03:57.337536 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:03:57.337544 | orchestrator |
2026-03-08 01:03:57.337551 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-08 01:03:57.337563 | orchestrator | Sunday 08 March 2026 01:01:59 +0000 (0:00:02.760) 0:02:16.599 **********
2026-03-08 01:03:57.337577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-08 01:03:57.337585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group':
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.337593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-08 01:03:57.337613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.337627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:03:57.337642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-08 01:03:57.337661 | orchestrator | 2026-03-08 01:03:57.337668 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-08 01:03:57.337676 | orchestrator | Sunday 08 March 2026 01:02:02 +0000 (0:00:02.813) 0:02:19.413 ********** 2026-03-08 01:03:57.337683 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:03:57.337691 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:03:57.337698 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:03:57.337706 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:03:57.337718 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:03:57.337730 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:03:57.337742 | orchestrator | 2026-03-08 01:03:57.337754 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-08 01:03:57.337766 | orchestrator | Sunday 08 March 2026 01:02:02 +0000 (0:00:00.588) 0:02:20.001 ********** 2026-03-08 01:03:57.337778 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:57.337791 | orchestrator | 2026-03-08 01:03:57.337803 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-08 01:03:57.337815 | orchestrator | Sunday 08 March 2026 01:02:05 +0000 (0:00:02.379) 0:02:22.381 ********** 2026-03-08 01:03:57.337846 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:57.337875 | orchestrator | 2026-03-08 01:03:57.337889 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-08 01:03:57.337897 | orchestrator | Sunday 08 March 2026 01:02:07 +0000 (0:00:02.711) 0:02:25.092 ********** 2026-03-08 01:03:57.337904 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:03:57.337911 | orchestrator | 2026-03-08 
01:03:57.337918 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.337925 | orchestrator | Sunday 08 March 2026 01:02:50 +0000 (0:00:42.919) 0:03:08.012 ********** 2026-03-08 01:03:57.337932 | orchestrator | 2026-03-08 01:03:57.337940 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.337947 | orchestrator | Sunday 08 March 2026 01:02:50 +0000 (0:00:00.067) 0:03:08.079 ********** 2026-03-08 01:03:57.337955 | orchestrator | 2026-03-08 01:03:57.337962 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.337969 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.253) 0:03:08.333 ********** 2026-03-08 01:03:57.337976 | orchestrator | 2026-03-08 01:03:57.337983 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.337990 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.065) 0:03:08.399 ********** 2026-03-08 01:03:57.337997 | orchestrator | 2026-03-08 01:03:57.338004 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.338012 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.068) 0:03:08.468 ********** 2026-03-08 01:03:57.338091 | orchestrator | 2026-03-08 01:03:57.338101 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-08 01:03:57.338110 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.093) 0:03:08.562 ********** 2026-03-08 01:03:57.338118 | orchestrator | 2026-03-08 01:03:57.338127 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-08 01:03:57.338135 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.071) 0:03:08.633 ********** 2026-03-08 01:03:57.338144 | orchestrator | changed: 
[testbed-node-0] 2026-03-08 01:03:57.338153 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:03:57.338162 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:03:57.338170 | orchestrator | 2026-03-08 01:03:57.338179 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-08 01:03:57.338188 | orchestrator | Sunday 08 March 2026 01:03:10 +0000 (0:00:18.979) 0:03:27.613 ********** 2026-03-08 01:03:57.338196 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:03:57.338205 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:03:57.338214 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:03:57.338222 | orchestrator | 2026-03-08 01:03:57.338231 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:03:57.338241 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:03:57.338251 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-08 01:03:57.338259 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-08 01:03:57.338275 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:03:57.338284 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:03:57.338292 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-08 01:03:57.338301 | orchestrator | 2026-03-08 01:03:57.338310 | orchestrator | 2026-03-08 01:03:57.338336 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:03:57.338345 | orchestrator | Sunday 08 March 2026 01:03:55 +0000 (0:00:45.129) 0:04:12.742 ********** 2026-03-08 01:03:57.338354 | orchestrator 
| =============================================================================== 2026-03-08 01:03:57.338363 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 45.13s 2026-03-08 01:03:57.338372 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.92s 2026-03-08 01:03:57.338380 | orchestrator | neutron : Restart neutron-server container ----------------------------- 18.98s 2026-03-08 01:03:57.338389 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.50s 2026-03-08 01:03:57.338398 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.47s 2026-03-08 01:03:57.338407 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.94s 2026-03-08 01:03:57.338415 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.21s 2026-03-08 01:03:57.338424 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.11s 2026-03-08 01:03:57.338433 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.09s 2026-03-08 01:03:57.338441 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.05s 2026-03-08 01:03:57.338450 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.89s 2026-03-08 01:03:57.338459 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.81s 2026-03-08 01:03:57.338467 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.78s 2026-03-08 01:03:57.338476 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.66s 2026-03-08 01:03:57.338484 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.62s 2026-03-08 01:03:57.338493 | orchestrator | 
service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.61s 2026-03-08 01:03:57.338502 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.53s 2026-03-08 01:03:57.338510 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.25s 2026-03-08 01:03:57.338519 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.15s 2026-03-08 01:03:57.338528 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.04s 2026-03-08 01:03:57.338537 | orchestrator | 2026-03-08 01:03:57 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:03:57.338550 | orchestrator | 2026-03-08 01:03:57 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:03:57.338559 | orchestrator | 2026-03-08 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:00.364998 | orchestrator | 2026-03-08 01:04:00 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:00.365540 | orchestrator | 2026-03-08 01:04:00 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:00.365888 | orchestrator | 2026-03-08 01:04:00 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:00.367089 | orchestrator | 2026-03-08 01:04:00 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:00.367139 | orchestrator | 2026-03-08 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:03.394256 | orchestrator | 2026-03-08 01:04:03 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:03.394634 | orchestrator | 2026-03-08 01:04:03 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:03.395461 | orchestrator | 2026-03-08 01:04:03 | INFO  | Task 
4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:03.396120 | orchestrator | 2026-03-08 01:04:03 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:03.396156 | orchestrator | 2026-03-08 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:06.437213 | orchestrator | 2026-03-08 01:04:06 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:06.439663 | orchestrator | 2026-03-08 01:04:06 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:06.439754 | orchestrator | 2026-03-08 01:04:06 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:06.440783 | orchestrator | 2026-03-08 01:04:06 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:06.440851 | orchestrator | 2026-03-08 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:09.490608 | orchestrator | 2026-03-08 01:04:09 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:09.492980 | orchestrator | 2026-03-08 01:04:09 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:09.494127 | orchestrator | 2026-03-08 01:04:09 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:09.496778 | orchestrator | 2026-03-08 01:04:09 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:09.496862 | orchestrator | 2026-03-08 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:12.540093 | orchestrator | 2026-03-08 01:04:12 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:12.541232 | orchestrator | 2026-03-08 01:04:12 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:12.542352 | orchestrator | 2026-03-08 01:04:12 | INFO  | Task 
4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:12.542765 | orchestrator | 2026-03-08 01:04:12 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:12.542835 | orchestrator | 2026-03-08 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:15.648996 | orchestrator | 2026-03-08 01:04:15 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:15.649078 | orchestrator | 2026-03-08 01:04:15 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:15.650916 | orchestrator | 2026-03-08 01:04:15 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:15.652106 | orchestrator | 2026-03-08 01:04:15 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:15.652142 | orchestrator | 2026-03-08 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:18.697624 | orchestrator | 2026-03-08 01:04:18 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:18.698825 | orchestrator | 2026-03-08 01:04:18 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:18.700702 | orchestrator | 2026-03-08 01:04:18 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:18.702859 | orchestrator | 2026-03-08 01:04:18 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:18.703130 | orchestrator | 2026-03-08 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:21.778976 | orchestrator | 2026-03-08 01:04:21 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:21.780185 | orchestrator | 2026-03-08 01:04:21 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:21.781948 | orchestrator | 2026-03-08 01:04:21 | INFO  | Task 
4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:21.783450 | orchestrator | 2026-03-08 01:04:21 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:21.783714 | orchestrator | 2026-03-08 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:24.826332 | orchestrator | 2026-03-08 01:04:24 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:24.827100 | orchestrator | 2026-03-08 01:04:24 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:24.828329 | orchestrator | 2026-03-08 01:04:24 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:24.829051 | orchestrator | 2026-03-08 01:04:24 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:24.829245 | orchestrator | 2026-03-08 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:27.875466 | orchestrator | 2026-03-08 01:04:27 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:27.875921 | orchestrator | 2026-03-08 01:04:27 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:27.877456 | orchestrator | 2026-03-08 01:04:27 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:27.879591 | orchestrator | 2026-03-08 01:04:27 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:27.879646 | orchestrator | 2026-03-08 01:04:27 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:30.913890 | orchestrator | 2026-03-08 01:04:30 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:30.914165 | orchestrator | 2026-03-08 01:04:30 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:30.915309 | orchestrator | 2026-03-08 01:04:30 | INFO  | Task 
4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:30.915877 | orchestrator | 2026-03-08 01:04:30 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:30.915914 | orchestrator | 2026-03-08 01:04:30 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:33.971395 | orchestrator | 2026-03-08 01:04:33 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:33.973357 | orchestrator | 2026-03-08 01:04:33 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:33.977210 | orchestrator | 2026-03-08 01:04:33 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state STARTED 2026-03-08 01:04:33.980542 | orchestrator | 2026-03-08 01:04:33 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:33.980599 | orchestrator | 2026-03-08 01:04:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:37.049550 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state STARTED 2026-03-08 01:04:37.050943 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:37.052402 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 4f67e4e2-bce5-4c6a-8806-a413db356c45 is in state SUCCESS 2026-03-08 01:04:37.053485 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:04:37.055009 | orchestrator | 2026-03-08 01:04:37 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:37.055046 | orchestrator | 2026-03-08 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:40.100289 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task f2b105d4-e6dc-4439-a5a6-c9e689c50315 is in state SUCCESS 2026-03-08 01:04:40.101222 | orchestrator | 2026-03-08 01:04:40.101267 | orchestrator | 2026-03-08 
01:04:40.101276 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:04:40.101284 | orchestrator | 2026-03-08 01:04:40.101290 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:04:40.101298 | orchestrator | Sunday 08 March 2026 01:04:03 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-03-08 01:04:40.101304 | orchestrator | ok: [testbed-manager] 2026-03-08 01:04:40.101311 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:40.101318 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:40.101324 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:40.101330 | orchestrator | ok: [testbed-node-3] 2026-03-08 01:04:40.101337 | orchestrator | ok: [testbed-node-4] 2026-03-08 01:04:40.101343 | orchestrator | ok: [testbed-node-5] 2026-03-08 01:04:40.101350 | orchestrator | 2026-03-08 01:04:40.101356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:04:40.101363 | orchestrator | Sunday 08 March 2026 01:04:03 +0000 (0:00:00.627) 0:00:00.845 ********** 2026-03-08 01:04:40.101370 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101377 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101383 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101390 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101396 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101402 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101409 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-08 01:04:40.101415 | orchestrator | 2026-03-08 01:04:40.101422 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-08 
01:04:40.101428 | orchestrator | 2026-03-08 01:04:40.101435 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-08 01:04:40.101442 | orchestrator | Sunday 08 March 2026 01:04:04 +0000 (0:00:00.665) 0:00:01.511 ********** 2026-03-08 01:04:40.101449 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:04:40.101456 | orchestrator | 2026-03-08 01:04:40.101462 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-08 01:04:40.101468 | orchestrator | Sunday 08 March 2026 01:04:05 +0000 (0:00:01.298) 0:00:02.810 ********** 2026-03-08 01:04:40.101475 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-08 01:04:40.101481 | orchestrator | 2026-03-08 01:04:40.101488 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-08 01:04:40.101494 | orchestrator | Sunday 08 March 2026 01:04:09 +0000 (0:00:03.488) 0:00:06.298 ********** 2026-03-08 01:04:40.101510 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-08 01:04:40.101518 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-08 01:04:40.101539 | orchestrator | 2026-03-08 01:04:40.101546 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-08 01:04:40.101552 | orchestrator | Sunday 08 March 2026 01:04:15 +0000 (0:00:06.753) 0:00:13.052 ********** 2026-03-08 01:04:40.101559 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-08 01:04:40.101566 | orchestrator | 2026-03-08 01:04:40.101573 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
users] ************************* 2026-03-08 01:04:40.101578 | orchestrator | Sunday 08 March 2026 01:04:19 +0000 (0:00:03.511) 0:00:16.564 ********** 2026-03-08 01:04:40.101600 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-08 01:04:40.101608 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:04:40.101614 | orchestrator | 2026-03-08 01:04:40.101620 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-08 01:04:40.101626 | orchestrator | Sunday 08 March 2026 01:04:23 +0000 (0:00:03.871) 0:00:20.436 ********** 2026-03-08 01:04:40.101633 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-08 01:04:40.101660 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-08 01:04:40.101688 | orchestrator | 2026-03-08 01:04:40.101695 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-08 01:04:40.101702 | orchestrator | Sunday 08 March 2026 01:04:29 +0000 (0:00:05.894) 0:00:26.331 ********** 2026-03-08 01:04:40.101709 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-08 01:04:40.101716 | orchestrator | 2026-03-08 01:04:40.101723 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:04:40.101729 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101799 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101806 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101812 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101819 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101838 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101845 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:04:40.101852 | orchestrator | 2026-03-08 01:04:40.101859 | orchestrator | 2026-03-08 01:04:40.101865 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:04:40.101872 | orchestrator | Sunday 08 March 2026 01:04:34 +0000 (0:00:05.024) 0:00:31.355 ********** 2026-03-08 01:04:40.101878 | orchestrator | =============================================================================== 2026-03-08 01:04:40.101885 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.75s 2026-03-08 01:04:40.101891 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.89s 2026-03-08 01:04:40.101897 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.02s 2026-03-08 01:04:40.101903 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.87s 2026-03-08 01:04:40.101910 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.51s 2026-03-08 01:04:40.101917 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.49s 2026-03-08 01:04:40.101923 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.30s 2026-03-08 01:04:40.101930 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2026-03-08 01:04:40.101937 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s 2026-03-08 01:04:40.101944 | orchestrator | 2026-03-08 01:04:40.101950 | orchestrator | 2026-03-08 01:04:40.101957 | orchestrator 
| PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:04:40.101963 | orchestrator |
2026-03-08 01:04:40.101970 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:04:40.101976 | orchestrator | Sunday 08 March 2026 01:02:50 +0000 (0:00:00.285) 0:00:00.285 **********
2026-03-08 01:04:40.101990 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:04:40.101997 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:04:40.102003 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:04:40.102009 | orchestrator |
2026-03-08 01:04:40.102063 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:04:40.102071 | orchestrator | Sunday 08 March 2026 01:02:50 +0000 (0:00:00.325) 0:00:00.610 **********
2026-03-08 01:04:40.102078 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-08 01:04:40.102086 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-08 01:04:40.102092 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-08 01:04:40.102099 | orchestrator |
2026-03-08 01:04:40.102105 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-08 01:04:40.102112 | orchestrator |
2026-03-08 01:04:40.102124 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-08 01:04:40.102131 | orchestrator | Sunday 08 March 2026 01:02:50 +0000 (0:00:00.448) 0:00:01.059 **********
2026-03-08 01:04:40.102138 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:04:40.102145 | orchestrator |
2026-03-08 01:04:40.102152 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-08 01:04:40.102159 | orchestrator | Sunday 08 March 2026 01:02:51 +0000 (0:00:00.551)
0:00:01.610 **********
2026-03-08 01:04:40.102166 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-08 01:04:40.102173 | orchestrator |
2026-03-08 01:04:40.102180 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-08 01:04:40.102187 | orchestrator | Sunday 08 March 2026 01:02:55 +0000 (0:00:03.724) 0:00:05.335 **********
2026-03-08 01:04:40.102193 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-08 01:04:40.102200 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-08 01:04:40.102206 | orchestrator |
2026-03-08 01:04:40.102212 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-08 01:04:40.102219 | orchestrator | Sunday 08 March 2026 01:03:02 +0000 (0:00:06.930) 0:00:12.266 **********
2026-03-08 01:04:40.102225 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:04:40.102231 | orchestrator |
2026-03-08 01:04:40.102238 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-08 01:04:40.102244 | orchestrator | Sunday 08 March 2026 01:03:05 +0000 (0:00:03.118) 0:00:15.384 **********
2026-03-08 01:04:40.102251 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-08 01:04:40.102258 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:04:40.102264 | orchestrator |
2026-03-08 01:04:40.102272 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-08 01:04:40.102278 | orchestrator | Sunday 08 March 2026 01:03:09 +0000 (0:00:03.847) 0:00:19.231 **********
2026-03-08 01:04:40.102285 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:04:40.102291 | orchestrator |
2026-03-08 01:04:40.102297 |
orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-08 01:04:40.102304 | orchestrator | Sunday 08 March 2026 01:03:12 +0000 (0:00:03.275) 0:00:22.507 **********
2026-03-08 01:04:40.102310 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-08 01:04:40.102316 | orchestrator |
2026-03-08 01:04:40.102323 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-08 01:04:40.102330 | orchestrator | Sunday 08 March 2026 01:03:16 +0000 (0:00:04.222) 0:00:26.729 **********
2026-03-08 01:04:40.102336 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:04:40.102343 | orchestrator |
2026-03-08 01:04:40.102349 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-08 01:04:40.102367 | orchestrator | Sunday 08 March 2026 01:03:20 +0000 (0:00:03.742) 0:00:30.472 **********
2026-03-08 01:04:40.102374 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:04:40.102381 | orchestrator |
2026-03-08 01:04:40.102387 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-08 01:04:40.102394 | orchestrator | Sunday 08 March 2026 01:03:24 +0000 (0:00:03.751) 0:00:34.224 **********
2026-03-08 01:04:40.102401 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:04:40.102407 | orchestrator |
2026-03-08 01:04:40.102414 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-08 01:04:40.102421 | orchestrator | Sunday 08 March 2026 01:03:27 +0000 (0:00:03.525) 0:00:37.749 **********
2026-03-08 01:04:40.102430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})
2026-03-08 01:04:40.102485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:04:40.102492 | orchestrator |
2026-03-08 01:04:40.102498 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-08 01:04:40.102505 | orchestrator | Sunday 08 March 2026 01:03:29 +0000 (0:00:01.510) 0:00:39.260 **********
2026-03-08 01:04:40.102512 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.102518 | orchestrator |
2026-03-08 01:04:40.102525 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-08 01:04:40.102532 | orchestrator | Sunday 08 March 2026 01:03:29 +0000 (0:00:00.294) 0:00:39.554 **********
2026-03-08 01:04:40.102538 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:04:40.102545 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:04:40.102552 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:04:40.102558 | orchestrator |
2026-03-08 01:04:40.102565 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-08 01:04:40.102571 | orchestrator | Sunday 08 March 2026 01:03:30 +0000 (0:00:00.618) 0:00:40.172 **********
2026-03-08 01:04:40.102577 | orchestrator | ok: [testbed-node-0
-> localhost] 2026-03-08 01:04:40.102583 | orchestrator | 2026-03-08 01:04:40.102592 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-08 01:04:40.102599 | orchestrator | Sunday 08 March 2026 01:03:31 +0000 (0:00:01.003) 0:00:41.176 ********** 2026-03-08 01:04:40.102606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102659 | orchestrator | 2026-03-08 01:04:40.102665 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-08 01:04:40.102675 | orchestrator | Sunday 08 March 2026 01:03:33 +0000 (0:00:02.571) 0:00:43.747 ********** 2026-03-08 01:04:40.102681 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:04:40.102688 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:04:40.102695 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:04:40.102701 | orchestrator | 2026-03-08 01:04:40.102708 | orchestrator | TASK [magnum : 
include_tasks] ************************************************** 2026-03-08 01:04:40.102715 | orchestrator | Sunday 08 March 2026 01:03:33 +0000 (0:00:00.322) 0:00:44.070 ********** 2026-03-08 01:04:40.102722 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:04:40.102761 | orchestrator | 2026-03-08 01:04:40.102772 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-08 01:04:40.102779 | orchestrator | Sunday 08 March 2026 01:03:35 +0000 (0:00:01.312) 0:00:45.382 ********** 2026-03-08 01:04:40.102791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.102817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.102848 | orchestrator | 2026-03-08 01:04:40.102855 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-08 01:04:40.102862 | orchestrator | Sunday 08 March 2026 01:03:37 +0000 
(0:00:02.538) 0:00:47.921 ********** 2026-03-08 01:04:40.102870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.102880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.102887 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.102894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.102907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.102915 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.102927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.102933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.102940 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.102946 | orchestrator | 2026-03-08 01:04:40.102952 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-08 01:04:40.102958 | orchestrator | Sunday 08 March 2026 01:03:38 +0000 (0:00:00.603) 0:00:48.525 ********** 2026-03-08 01:04:40.102966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.102977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.102983 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.102992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.102999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.103005 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.103012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.103021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.103031 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.103038 | orchestrator | 2026-03-08 01:04:40.103044 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-08 01:04:40.103051 | orchestrator | Sunday 08 March 2026 01:03:39 +0000 (0:00:01.105) 0:00:49.630 ********** 2026-03-08 01:04:40.103057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103107 | orchestrator | 2026-03-08 01:04:40.103114 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-08 01:04:40.103121 | orchestrator | Sunday 08 March 2026 01:03:41 +0000 (0:00:02.401) 0:00:52.032 ********** 2026-03-08 01:04:40.103131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103182 | orchestrator | 2026-03-08 01:04:40.103188 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2026-03-08 01:04:40.103195 | orchestrator | Sunday 08 March 2026 01:03:47 +0000 (0:00:05.879) 0:00:57.912 ********** 2026-03-08 01:04:40.103202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.103214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.103221 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.103228 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.103234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.103241 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.103252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-08 01:04:40.103259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:04:40.103268 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.103275 | orchestrator | 2026-03-08 01:04:40.103282 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-08 01:04:40.103288 | orchestrator | Sunday 08 March 2026 01:03:48 +0000 (0:00:00.664) 0:00:58.577 ********** 2026-03-08 01:04:40.103297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-08 01:04:40.103321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:04:40.103347 | orchestrator | 2026-03-08 01:04:40.103355 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-08 01:04:40.103362 | orchestrator | Sunday 08 March 2026 01:03:50 +0000 (0:00:02.360) 0:01:00.937 ********** 2026-03-08 01:04:40.103369 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:04:40.103376 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:04:40.103382 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:04:40.103388 | orchestrator | 2026-03-08 01:04:40.103394 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-08 01:04:40.103401 | orchestrator | Sunday 08 March 2026 01:03:51 +0000 (0:00:00.308) 0:01:01.245 ********** 2026-03-08 01:04:40.103408 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.103415 | orchestrator | 2026-03-08 01:04:40.103452 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-08 01:04:40.103460 | orchestrator | Sunday 08 
March 2026 01:03:53 +0000 (0:00:02.484) 0:01:03.730 ********** 2026-03-08 01:04:40.103467 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.103474 | orchestrator | 2026-03-08 01:04:40.103481 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-08 01:04:40.103488 | orchestrator | Sunday 08 March 2026 01:03:55 +0000 (0:00:02.157) 0:01:05.887 ********** 2026-03-08 01:04:40.103495 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.103502 | orchestrator | 2026-03-08 01:04:40.103508 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 01:04:40.103515 | orchestrator | Sunday 08 March 2026 01:04:12 +0000 (0:00:17.171) 0:01:23.059 ********** 2026-03-08 01:04:40.103521 | orchestrator | 2026-03-08 01:04:40.103527 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 01:04:40.103534 | orchestrator | Sunday 08 March 2026 01:04:12 +0000 (0:00:00.080) 0:01:23.140 ********** 2026-03-08 01:04:40.103541 | orchestrator | 2026-03-08 01:04:40.103548 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-08 01:04:40.103555 | orchestrator | Sunday 08 March 2026 01:04:13 +0000 (0:00:00.105) 0:01:23.246 ********** 2026-03-08 01:04:40.103562 | orchestrator | 2026-03-08 01:04:40.103568 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-08 01:04:40.103582 | orchestrator | Sunday 08 March 2026 01:04:13 +0000 (0:00:00.080) 0:01:23.326 ********** 2026-03-08 01:04:40.103589 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.103595 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:40.103602 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:40.103609 | orchestrator | 2026-03-08 01:04:40.103616 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] 
****************** 2026-03-08 01:04:40.103628 | orchestrator | Sunday 08 March 2026 01:04:27 +0000 (0:00:14.458) 0:01:37.785 ********** 2026-03-08 01:04:40.103635 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:04:40.103642 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:04:40.103648 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:04:40.103655 | orchestrator | 2026-03-08 01:04:40.103661 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:04:40.103668 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-08 01:04:40.103675 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:04:40.103682 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:04:40.103689 | orchestrator | 2026-03-08 01:04:40.103696 | orchestrator | 2026-03-08 01:04:40.103703 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:04:40.103709 | orchestrator | Sunday 08 March 2026 01:04:37 +0000 (0:00:10.351) 0:01:48.136 ********** 2026-03-08 01:04:40.103715 | orchestrator | =============================================================================== 2026-03-08 01:04:40.103721 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.17s 2026-03-08 01:04:40.103728 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.46s 2026-03-08 01:04:40.103748 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.35s 2026-03-08 01:04:40.103754 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.93s 2026-03-08 01:04:40.103761 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.88s 2026-03-08 
01:04:40.103767 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.22s 2026-03-08 01:04:40.103773 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.85s 2026-03-08 01:04:40.103780 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.75s 2026-03-08 01:04:40.103786 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.74s 2026-03-08 01:04:40.103793 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s 2026-03-08 01:04:40.103800 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.53s 2026-03-08 01:04:40.103806 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.28s 2026-03-08 01:04:40.103817 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.12s 2026-03-08 01:04:40.103824 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.57s 2026-03-08 01:04:40.103830 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.54s 2026-03-08 01:04:40.103836 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.48s 2026-03-08 01:04:40.103842 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.40s 2026-03-08 01:04:40.103848 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.36s 2026-03-08 01:04:40.103855 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.16s 2026-03-08 01:04:40.103861 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.51s 2026-03-08 01:04:40.103962 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 
01:04:40.107992 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:04:40.108524 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:04:40.109407 | orchestrator | 2026-03-08 01:04:40 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:04:40.109435 | orchestrator | 2026-03-08 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:04:58.363379 | orchestrator | 2026-03-08 01:04:58 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:04:58.364614 | orchestrator | 2026-03-08 01:04:58 | INFO  |
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:07.539326 | orchestrator | 2026-03-08 01:05:07 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:07.541196 | orchestrator | 2026-03-08 01:05:07 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:07.541221 | orchestrator | 2026-03-08 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:10.587985 | orchestrator | 2026-03-08 01:05:10 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:10.591787 | orchestrator | 2026-03-08 01:05:10 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:10.594753 | orchestrator | 2026-03-08 01:05:10 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:10.598307 | orchestrator | 2026-03-08 01:05:10 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:10.598367 | orchestrator | 2026-03-08 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:13.643911 | orchestrator | 2026-03-08 01:05:13 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:13.646154 | orchestrator | 2026-03-08 01:05:13 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:13.646560 | orchestrator | 2026-03-08 01:05:13 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:13.647680 | orchestrator | 2026-03-08 01:05:13 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:13.647707 | orchestrator | 2026-03-08 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:16.717907 | orchestrator | 2026-03-08 01:05:16 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:16.718421 | orchestrator | 2026-03-08 01:05:16 | INFO  | Task 
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:16.721944 | orchestrator | 2026-03-08 01:05:16 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:16.724271 | orchestrator | 2026-03-08 01:05:16 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:16.724324 | orchestrator | 2026-03-08 01:05:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:19.750682 | orchestrator | 2026-03-08 01:05:19 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:19.751902 | orchestrator | 2026-03-08 01:05:19 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:19.753561 | orchestrator | 2026-03-08 01:05:19 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:19.754500 | orchestrator | 2026-03-08 01:05:19 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:19.754540 | orchestrator | 2026-03-08 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:22.774710 | orchestrator | 2026-03-08 01:05:22 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:22.774757 | orchestrator | 2026-03-08 01:05:22 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:22.775248 | orchestrator | 2026-03-08 01:05:22 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:22.775734 | orchestrator | 2026-03-08 01:05:22 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:22.776179 | orchestrator | 2026-03-08 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:25.855257 | orchestrator | 2026-03-08 01:05:25 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:25.857592 | orchestrator | 2026-03-08 01:05:25 | INFO  | Task 
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:25.857642 | orchestrator | 2026-03-08 01:05:25 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:25.858159 | orchestrator | 2026-03-08 01:05:25 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:25.858178 | orchestrator | 2026-03-08 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:28.881772 | orchestrator | 2026-03-08 01:05:28 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:28.884677 | orchestrator | 2026-03-08 01:05:28 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:28.886244 | orchestrator | 2026-03-08 01:05:28 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:28.887153 | orchestrator | 2026-03-08 01:05:28 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:28.887367 | orchestrator | 2026-03-08 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:31.914884 | orchestrator | 2026-03-08 01:05:31 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:31.915463 | orchestrator | 2026-03-08 01:05:31 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:31.916813 | orchestrator | 2026-03-08 01:05:31 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:31.917965 | orchestrator | 2026-03-08 01:05:31 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:31.918044 | orchestrator | 2026-03-08 01:05:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:34.947672 | orchestrator | 2026-03-08 01:05:34 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:34.949383 | orchestrator | 2026-03-08 01:05:34 | INFO  | Task 
5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:34.950420 | orchestrator | 2026-03-08 01:05:34 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:34.951698 | orchestrator | 2026-03-08 01:05:34 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:34.951734 | orchestrator | 2026-03-08 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:38.090979 | orchestrator | 2026-03-08 01:05:37 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:38.091056 | orchestrator | 2026-03-08 01:05:37 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:38.091063 | orchestrator | 2026-03-08 01:05:37 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:38.091067 | orchestrator | 2026-03-08 01:05:37 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:38.091073 | orchestrator | 2026-03-08 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:41.028102 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:41.029502 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state STARTED 2026-03-08 01:05:41.030528 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:05:41.031461 | orchestrator | 2026-03-08 01:05:41 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED 2026-03-08 01:05:41.031504 | orchestrator | 2026-03-08 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:05:44.056085 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:05:44.056152 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 
760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:05:44.056656 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 5641a33a-61e4-485a-9365-ab5890018e2e is in state SUCCESS
2026-03-08 01:05:44.057061 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED
2026-03-08 01:05:44.057973 | orchestrator | 2026-03-08 01:05:44 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state STARTED
2026-03-08 01:05:44.058056 | orchestrator | 2026-03-08 01:05:44 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 01:05:47 through 01:06:26; tasks 85f31e5b, 760542ba, 4597916f and 20eb2391 all remained in state STARTED ...]
2026-03-08 01:06:29.729326 | orchestrator | 2026-03-08 01:06:29 | INFO  | Task
a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED
2026-03-08 01:06:29.731089 | orchestrator | 2026-03-08 01:06:29 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED
2026-03-08 01:06:29.732802 | orchestrator | 2026-03-08 01:06:29 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:06:29.734191 | orchestrator | 2026-03-08 01:06:29 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED
2026-03-08 01:06:29.738395 | orchestrator | 2026-03-08 01:06:29 | INFO  | Task 20eb2391-c27d-419d-9f33-cf74508326df is in state SUCCESS
2026-03-08 01:06:29.740262 | orchestrator |
2026-03-08 01:06:29.740299 | orchestrator |
2026-03-08 01:06:29.740308 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-08 01:06:29.740316 | orchestrator |
2026-03-08 01:06:29.740324 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-08 01:06:29.740332 | orchestrator | Sunday 08 March 2026 00:59:42 +0000 (0:00:00.121) 0:00:00.121 **********
2026-03-08 01:06:29.740339 | orchestrator | changed: [localhost]
2026-03-08 01:06:29.740345 | orchestrator |
2026-03-08 01:06:29.740352 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-08 01:06:29.740358 | orchestrator | Sunday 08 March 2026 00:59:44 +0000 (0:00:01.321) 0:00:01.443 **********
2026-03-08 01:06:29.740366 | orchestrator |
2026-03-08 01:06:29.740373 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[... the STILL ALIVE heartbeat was emitted 6 more times while the download ran ...]
2026-03-08 01:06:29.740466 | orchestrator | changed: [localhost]
2026-03-08 01:06:29.740472 | orchestrator |
2026-03-08 01:06:29.740478 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-08 01:06:29.740535 | orchestrator | Sunday 08 March 2026 01:05:29 +0000 (0:05:44.939) 0:05:46.382 **********
2026-03-08 01:06:29.740544 | orchestrator | changed: [localhost]
2026-03-08 01:06:29.740551 | orchestrator |
2026-03-08 01:06:29.740557 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:06:29.740564 | orchestrator |
2026-03-08 01:06:29.740570 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:06:29.740577 | orchestrator | Sunday 08 March 2026 01:05:41 +0000 (0:00:12.284) 0:05:58.666 **********
2026-03-08 01:06:29.740584 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:06:29.740590 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:06:29.740597 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:06:29.740603 | orchestrator |
2026-03-08 01:06:29.740610 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:06:29.740617 | orchestrator | Sunday 08 March 2026 01:05:42 +0000 (0:00:00.594) 0:05:59.261 ********** 2026-03-08
01:06:29.740623 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-08 01:06:29.740627 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-08 01:06:29.740632 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-08 01:06:29.740637 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-08 01:06:29.740641 | orchestrator |
2026-03-08 01:06:29.740646 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-08 01:06:29.740650 | orchestrator | skipping: no hosts matched
2026-03-08 01:06:29.740655 | orchestrator |
2026-03-08 01:06:29.740660 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:06:29.740664 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:06:29.740678 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:06:29.740687 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:06:29.740693 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:06:29.740699 | orchestrator |
2026-03-08 01:06:29.740705 | orchestrator |
2026-03-08 01:06:29.740711 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:06:29.740717 | orchestrator | Sunday 08 March 2026 01:05:42 +0000 (0:00:00.761) 0:06:00.022 **********
2026-03-08 01:06:29.740723 | orchestrator | ===============================================================================
2026-03-08 01:06:29.740729 | orchestrator | Download ironic-agent initramfs --------------------------------------- 344.94s
2026-03-08 01:06:29.740735 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.28s
2026-03-08 01:06:29.740742 | orchestrator | Ensure the destination directory exists --------------------------------- 1.32s
2026-03-08 01:06:29.740748 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2026-03-08 01:06:29.740754 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2026-03-08 01:06:29.740760 | orchestrator |
2026-03-08 01:06:29.740766 | orchestrator |
2026-03-08 01:06:29.740791 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:06:29.740797 | orchestrator |
2026-03-08 01:06:29.740803 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:06:29.740809 | orchestrator | Sunday 08 March 2026 01:03:25 +0000 (0:00:00.311) 0:00:00.311 **********
2026-03-08 01:06:29.740816 | orchestrator | ok: [testbed-manager]
2026-03-08 01:06:29.740822 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:06:29.740828 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:06:29.740841 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:06:29.740847 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:06:29.740860 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:06:29.740867 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:06:29.740873 | orchestrator |
2026-03-08 01:06:29.740880 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:06:29.740886 | orchestrator | Sunday 08 March 2026 01:03:26 +0000 (0:00:00.839) 0:00:01.150 **********
2026-03-08 01:06:29.740902 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740909 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740934 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-08 01:06:29.740940 |
orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740947 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740953 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740960 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-08 01:06:29.740966 | orchestrator |
2026-03-08 01:06:29.740973 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-08 01:06:29.740979 | orchestrator |
2026-03-08 01:06:29.740985 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-08 01:06:29.740992 | orchestrator | Sunday 08 March 2026 01:03:27 +0000 (0:00:00.707) 0:00:01.858 **********
2026-03-08 01:06:29.740998 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:06:29.741005 | orchestrator |
2026-03-08 01:06:29.741012 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-08 01:06:29.741019 | orchestrator | Sunday 08 March 2026 01:03:28 +0000 (0:00:01.623) 0:00:03.481 **********
2026-03-08 01:06:29.741027 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-08 01:06:29.741042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:29.741102 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741121 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:29.741129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:29.741138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:29.741144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-08 01:06:29.741154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:29.741164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-08 01:06:29.741171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-08 01:06:29.741178 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name':
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:29.741186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741205 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741227 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741287 | orchestrator | 2026-03-08 01:06:29.741294 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-08 01:06:29.741300 | orchestrator | Sunday 08 March 2026 01:03:32 +0000 (0:00:03.505) 0:00:06.986 ********** 2026-03-08 01:06:29.741306 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:06:29.741313 | orchestrator | 2026-03-08 01:06:29.741319 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-08 01:06:29.741325 | orchestrator | Sunday 08 March 2026 01:03:33 +0000 (0:00:01.489) 0:00:08.476 ********** 2026-03-08 01:06:29.741331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741346 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:29.741356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.741384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741390 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-08 01:06:29.741396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741452 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741483 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:29.741662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.741692 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741696 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.741711 | orchestrator | 2026-03-08 01:06:29.741715 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-08 01:06:29.741719 | orchestrator | Sunday 08 March 2026 01:03:39 +0000 (0:00:05.796) 0:00:14.272 ********** 2026-03-08 01:06:29.741723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-08 01:06:29.741727 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741733 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741740 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 01:06:29.741744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741748 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.741754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-03-08 01:06:29.741758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-08 01:06:29.741811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741824 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:29.741828 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.741832 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.741837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741863 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.741870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741889 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.741898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741905 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741917 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.741924 | orchestrator | 2026-03-08 01:06:29.741928 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-08 01:06:29.741935 | orchestrator | Sunday 08 March 2026 01:03:41 +0000 (0:00:01.627) 0:00:15.899 ********** 2026-03-08 01:06:29.741939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-08 01:06:29.741947 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741958 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-08 01:06:29.741962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741966 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.741970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.741977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.741994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.741998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.742004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742010 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:29.742046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-03-08 01:06:29.742075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742082 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.742088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.742095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-08 01:06:29.742126 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.742133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.742144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742697 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.742702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.742707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742714 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.742722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-08 01:06:29.742726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-08 01:06:29.742738 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.742742 | orchestrator | 2026-03-08 01:06:29.742746 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-08 01:06:29.742750 | orchestrator | Sunday 08 March 2026 01:03:42 +0000 (0:00:01.905) 0:00:17.805 ********** 2026-03-08 01:06:29.742758 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:29.742762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 
01:06:29.742780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.742796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742814 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742843 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 01:06:29.742850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.742903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.742928 | orchestrator | 2026-03-08 01:06:29.742934 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-08 
01:06:29.742940 | orchestrator | Sunday 08 March 2026 01:03:49 +0000 (0:00:06.561) 0:00:24.367 ********** 2026-03-08 01:06:29.742945 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:29.742951 | orchestrator | 2026-03-08 01:06:29.742956 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-08 01:06:29.742962 | orchestrator | Sunday 08 March 2026 01:03:50 +0000 (0:00:01.234) 0:00:25.601 ********** 2026-03-08 01:06:29.742969 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.742980 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.742987 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.742993 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743016 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743023 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743033 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743039 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743046 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743096 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743105 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743112 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743119 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743129 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743136 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743143 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743154 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743162 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743181 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743188 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1870550, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.164152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:29.743373 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743387 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743391 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743400 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743407 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743411 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743422 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743426 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743430 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743436 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743440 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743444 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743448 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1870560, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1671052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:29.743494 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743501 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743505 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743510 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743514 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743518 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743526 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743530 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743536 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743540 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743549 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743553 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1870548, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.162813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:29.743560 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743564 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743570 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743574 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743581 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743585 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743589 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743597 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 
'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743601 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743607 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743611 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743616 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743620 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743624 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743630 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743641 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743645 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1870556, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:29.743651 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743655 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743659 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 
1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743665 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743671 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743675 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743679 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743685 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743689 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.743693 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.744074 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.744094 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.744098 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.744102 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-08 01:06:29.744109 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1870546, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1623049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-08 01:06:29.744113 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 
'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744117 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744127 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744131 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744135 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744149 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744153 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744165 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744169 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744173 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744182 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744186 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744190 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:06:29.744194 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744202 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1870551, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1644924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744207 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744211 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744214 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744220 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744224 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:06:29.744228 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744236 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:06:29.744239 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744245 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744249 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:06:29.744253 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744257 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744261 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744267 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1870555, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.165786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744277 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:06:29.744281 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744287 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744291 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744295 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:06:29.744299 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1870552, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744303 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1870549, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1638234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744308 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870559, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1668344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744315 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870544, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1616852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744319 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1870566, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1686552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1870558, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1665998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1870547, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1625612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744332 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1870545, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1617858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744336 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1870554, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1653023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744342 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1870553, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1647859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744348 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1870565, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1682026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-08 01:06:29.744352 | orchestrator |
2026-03-08 01:06:29.744356 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-08 01:06:29.744360 | orchestrator | Sunday 08 March 2026 01:04:16 +0000 (0:00:25.515) 0:00:51.116 **********
2026-03-08 01:06:29.744364 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 01:06:29.744368 | orchestrator |
2026-03-08 01:06:29.744372 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-08 01:06:29.744375 | orchestrator | Sunday 08 March 2026 01:04:17 +0000 (0:00:01.189) 0:00:52.306 **********
2026-03-08 01:06:29.744379 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744387 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744391 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744395 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744399 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:06:29.744402 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744406 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744412 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744420 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744423 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 01:06:29.744427 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744435 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744438 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744442 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744446 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-08 01:06:29.744449 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744457 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744464 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744468 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:06:29.744472 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744479 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744504 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744511 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-08 01:06:29.744515 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744523 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744530 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744534 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:06:29.744538 | orchestrator | [WARNING]: Skipped
2026-03-08 01:06:29.744541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744545 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-08 01:06:29.744549 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-08 01:06:29.744553 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-08 01:06:29.744556 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:06:29.744560 | orchestrator |
2026-03-08 01:06:29.744564 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-08 01:06:29.744568 | orchestrator | Sunday 08 March 2026 01:04:19 +0000 (0:00:02.317) 0:00:54.624 **********
2026-03-08 01:06:29.744572 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744576 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:06:29.744580 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744585 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:06:29.744589 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744593 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:06:29.744597 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744601 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:06:29.744605 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744608 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:06:29.744612 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744616 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:06:29.744620 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-08 01:06:29.744623 | orchestrator |
2026-03-08 01:06:29.744627 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-08 01:06:29.744631 | orchestrator | Sunday 08 March 2026 01:04:37 +0000 (0:00:17.279) 0:01:11.903 **********
2026-03-08 01:06:29.744635 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744638 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:06:29.744642 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744646 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:06:29.744650 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744654 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:06:29.744657 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744661 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:06:29.744665 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744669 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:06:29.744675 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744682 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:06:29.744686 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-08 01:06:29.744689 | orchestrator |
2026-03-08 01:06:29.744693 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-08 01:06:29.744697 | orchestrator | Sunday 08 March 2026 01:04:40 +0000 (0:00:03.431) 0:01:15.335 **********
2026-03-08 01:06:29.744701 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744706 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744711 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:06:29.744715 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:06:29.744720 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744724 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:06:29.744728 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744733 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:06:29.744738 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744742 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:06:29.744747 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744751 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:06:29.744756 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-08 01:06:29.744760 | orchestrator |
2026-03-08 01:06:29.744764 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-08 01:06:29.744769 | orchestrator | Sunday 08 March 2026 01:04:42 +0000 (0:00:01.532) 0:01:16.867 **********
2026-03-08 01:06:29.744773 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-08 01:06:29.744778 | orchestrator |
2026-03-08 01:06:29.744782 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-08 01:06:29.744787 | orchestrator | Sunday 08 March 2026 01:04:42 +0000 (0:00:00.723) 0:01:17.591 **********
2026-03-08 01:06:29.744791 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:06:29.744796 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:06:29.744799 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:06:29.744803 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:06:29.744807 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:06:29.744810 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:06:29.744814 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:06:29.744818 | orchestrator |
2026-03-08 01:06:29.744822 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-08 01:06:29.744825 | orchestrator | Sunday 08 March 2026 01:04:43 +0000 (0:00:00.622) 0:01:18.214 **********
2026-03-08 01:06:29.744829 | orchestrator | skipping: [testbed-manager]
2026-03-08 01:06:29.744833 | orchestrator 
| skipping: [testbed-node-3] 2026-03-08 01:06:29.744838 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.744842 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.744846 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.744850 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.744854 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.744857 | orchestrator | 2026-03-08 01:06:29.744861 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-08 01:06:29.744865 | orchestrator | Sunday 08 March 2026 01:04:45 +0000 (0:00:02.254) 0:01:20.469 ********** 2026-03-08 01:06:29.744871 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744875 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744879 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744883 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.744886 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.744890 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:29.744894 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744898 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.744901 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744905 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.744909 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744912 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.744916 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-08 01:06:29.744920 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.744924 | orchestrator | 2026-03-08 01:06:29.744927 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-08 01:06:29.744931 | orchestrator | Sunday 08 March 2026 01:04:47 +0000 (0:00:01.608) 0:01:22.078 ********** 2026-03-08 01:06:29.744935 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744941 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.744945 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744949 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:29.744953 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744957 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.744960 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744964 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.744968 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744972 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.744975 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-08 01:06:29.744979 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-08 01:06:29.744983 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.744987 | orchestrator | 2026-03-08 01:06:29.744990 | 
orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-08 01:06:29.744994 | orchestrator | Sunday 08 March 2026 01:04:48 +0000 (0:00:01.513) 0:01:23.592 ********** 2026-03-08 01:06:29.744998 | orchestrator | [WARNING]: Skipped 2026-03-08 01:06:29.745002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-08 01:06:29.745005 | orchestrator | due to this access issue: 2026-03-08 01:06:29.745009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-08 01:06:29.745013 | orchestrator | not a directory 2026-03-08 01:06:29.745017 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-08 01:06:29.745020 | orchestrator | 2026-03-08 01:06:29.745024 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-08 01:06:29.745028 | orchestrator | Sunday 08 March 2026 01:04:50 +0000 (0:00:01.282) 0:01:24.874 ********** 2026-03-08 01:06:29.745032 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.745038 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.745042 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:06:29.745045 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.745049 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.745053 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.745057 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.745060 | orchestrator | 2026-03-08 01:06:29.745064 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-08 01:06:29.745068 | orchestrator | Sunday 08 March 2026 01:04:50 +0000 (0:00:00.917) 0:01:25.792 ********** 2026-03-08 01:06:29.745072 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.745075 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:06:29.745079 | orchestrator | skipping: 
[testbed-node-1] 2026-03-08 01:06:29.745083 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:06:29.745087 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:06:29.745090 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:06:29.745094 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:06:29.745098 | orchestrator | 2026-03-08 01:06:29.745101 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-08 01:06:29.745105 | orchestrator | Sunday 08 March 2026 01:04:51 +0000 (0:00:00.940) 0:01:26.732 ********** 2026-03-08 01:06:29.745111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-08 01:06:29.745123 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745153 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-08 01:06:29.745157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 
01:06:29.745178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745223 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-08 
01:06:29.745228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-08 01:06:29.745238 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-08 01:06:29.745260 | orchestrator | 2026-03-08 01:06:29.745263 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-08 01:06:29.745267 | orchestrator | Sunday 08 March 2026 01:04:55 +0000 (0:00:03.910) 0:01:30.642 ********** 2026-03-08 01:06:29.745271 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-08 01:06:29.745275 | orchestrator | skipping: [testbed-manager] 2026-03-08 01:06:29.745279 | orchestrator | 2026-03-08 01:06:29.745283 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-03-08 01:06:29.745287 | orchestrator | Sunday 08 March 2026 01:04:56 +0000 (0:00:01.147) 0:01:31.790 ********** 2026-03-08 01:06:29.745290 | orchestrator | 2026-03-08 01:06:29.745294 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745298 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.133) 0:01:31.923 ********** 2026-03-08 01:06:29.745302 | orchestrator | 2026-03-08 01:06:29.745306 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745309 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.088) 0:01:32.012 ********** 2026-03-08 01:06:29.745313 | orchestrator | 2026-03-08 01:06:29.745319 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745322 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.063) 0:01:32.076 ********** 2026-03-08 01:06:29.745326 | orchestrator | 2026-03-08 01:06:29.745330 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745334 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.258) 0:01:32.335 ********** 2026-03-08 01:06:29.745338 | orchestrator | 2026-03-08 01:06:29.745341 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745345 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.064) 0:01:32.399 ********** 2026-03-08 01:06:29.745349 | orchestrator | 2026-03-08 01:06:29.745353 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-08 01:06:29.745357 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.064) 0:01:32.463 ********** 2026-03-08 01:06:29.745360 | orchestrator | 2026-03-08 01:06:29.745364 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] 
************* 2026-03-08 01:06:29.745368 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:00.090) 0:01:32.553 ********** 2026-03-08 01:06:29.745372 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:29.745375 | orchestrator | 2026-03-08 01:06:29.745379 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-08 01:06:29.745385 | orchestrator | Sunday 08 March 2026 01:05:15 +0000 (0:00:17.872) 0:01:50.426 ********** 2026-03-08 01:06:29.745389 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.745392 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:29.745396 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.745400 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.745404 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:29.745408 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:29.745411 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:29.745415 | orchestrator | 2026-03-08 01:06:29.745419 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-08 01:06:29.745423 | orchestrator | Sunday 08 March 2026 01:05:30 +0000 (0:00:14.893) 0:02:05.319 ********** 2026-03-08 01:06:29.745426 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.745430 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.745434 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.745438 | orchestrator | 2026-03-08 01:06:29.745442 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-08 01:06:29.745448 | orchestrator | Sunday 08 March 2026 01:05:36 +0000 (0:00:05.881) 0:02:11.201 ********** 2026-03-08 01:06:29.745452 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.745456 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.745460 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.745463 
| orchestrator | 2026-03-08 01:06:29.745467 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-08 01:06:29.745471 | orchestrator | Sunday 08 March 2026 01:05:43 +0000 (0:00:06.704) 0:02:17.906 ********** 2026-03-08 01:06:29.745475 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:29.745480 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.745520 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:29.745529 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.745534 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:29.745540 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.745546 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:29.745552 | orchestrator | 2026-03-08 01:06:29.745558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-08 01:06:29.745564 | orchestrator | Sunday 08 March 2026 01:05:57 +0000 (0:00:14.423) 0:02:32.329 ********** 2026-03-08 01:06:29.745571 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:29.745577 | orchestrator | 2026-03-08 01:06:29.745583 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-08 01:06:29.745589 | orchestrator | Sunday 08 March 2026 01:06:04 +0000 (0:00:07.226) 0:02:39.559 ********** 2026-03-08 01:06:29.745595 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:06:29.745600 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:06:29.745607 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:06:29.745613 | orchestrator | 2026-03-08 01:06:29.745619 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-08 01:06:29.745625 | orchestrator | Sunday 08 March 2026 01:06:16 +0000 (0:00:11.554) 0:02:51.113 ********** 2026-03-08 01:06:29.745629 | orchestrator | changed: [testbed-manager] 2026-03-08 01:06:29.745632 | 
orchestrator | 2026-03-08 01:06:29.745636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-08 01:06:29.745640 | orchestrator | Sunday 08 March 2026 01:06:21 +0000 (0:00:04.714) 0:02:55.828 ********** 2026-03-08 01:06:29.745644 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:06:29.745648 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:06:29.745651 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:06:29.745655 | orchestrator | 2026-03-08 01:06:29.745659 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:06:29.745663 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-08 01:06:29.745672 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:29.745676 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:29.745680 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-08 01:06:29.745684 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:29.745690 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:29.745694 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-08 01:06:29.745698 | orchestrator | 2026-03-08 01:06:29.745701 | orchestrator | 2026-03-08 01:06:29.745706 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:06:29.745713 | orchestrator | Sunday 08 March 2026 01:06:27 +0000 (0:00:06.605) 0:03:02.434 ********** 2026-03-08 01:06:29.745719 | orchestrator | 
===============================================================================
2026-03-08 01:06:29.745726 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.52s
2026-03-08 01:06:29.745732 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.87s
2026-03-08 01:06:29.745738 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.28s
2026-03-08 01:06:29.745743 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.89s
2026-03-08 01:06:29.745749 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.43s
2026-03-08 01:06:29.745755 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.55s
2026-03-08 01:06:29.745761 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.23s
2026-03-08 01:06:29.745767 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.70s
2026-03-08 01:06:29.745774 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.61s
2026-03-08 01:06:29.745781 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.56s
2026-03-08 01:06:29.745787 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.88s
2026-03-08 01:06:29.745793 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.80s
2026-03-08 01:06:29.745800 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.71s
2026-03-08 01:06:29.745808 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.91s
2026-03-08 01:06:29.745812 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.51s
2026-03-08 01:06:29.745816 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.43s
2026-03-08 01:06:29.745819 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.32s
2026-03-08 01:06:29.745823 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.25s
2026-03-08 01:06:29.745827 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.91s
2026-03-08 01:06:29.745830 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.63s
2026-03-08 01:06:29.745834 | orchestrator | 2026-03-08 01:06:29 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:06:32.793719 | orchestrator | 2026-03-08 01:06:32 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED
2026-03-08 01:06:32.793798 | orchestrator | 2026-03-08 01:06:32 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED
2026-03-08 01:06:32.793833 | orchestrator | 2026-03-08 01:06:32 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:06:32.793841 | orchestrator | 2026-03-08 01:06:32 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED
2026-03-08 01:06:32.793849 | orchestrator | 2026-03-08 01:06:32 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:06:35.842154 | orchestrator | 2026-03-08 01:06:35 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED
2026-03-08 01:06:35.843764 | orchestrator | 2026-03-08 01:06:35 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED
2026-03-08 01:06:35.845897 | orchestrator | 2026-03-08 01:06:35 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:06:35.848068 | orchestrator | 2026-03-08 01:06:35 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED
2026-03-08 01:06:35.848117 | orchestrator | 2026-03-08 01:06:35 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:06:38.892557 | orchestrator | 2026-03-08 01:06:38 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:38.895158 | orchestrator | 2026-03-08 01:06:38 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:38.897274 | orchestrator | 2026-03-08 01:06:38 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:38.899117 | orchestrator | 2026-03-08 01:06:38 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:38.899363 | orchestrator | 2026-03-08 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:41.944909 | orchestrator | 2026-03-08 01:06:41 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:41.945007 | orchestrator | 2026-03-08 01:06:41 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:41.946839 | orchestrator | 2026-03-08 01:06:41 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:41.949002 | orchestrator | 2026-03-08 01:06:41 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:41.949113 | orchestrator | 2026-03-08 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:45.040924 | orchestrator | 2026-03-08 01:06:45 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:45.047513 | orchestrator | 2026-03-08 01:06:45 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:45.047563 | orchestrator | 2026-03-08 01:06:45 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:45.050791 | orchestrator | 2026-03-08 01:06:45 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:45.051332 | orchestrator | 2026-03-08 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:48.098594 | 
orchestrator | 2026-03-08 01:06:48 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:48.100704 | orchestrator | 2026-03-08 01:06:48 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:48.102963 | orchestrator | 2026-03-08 01:06:48 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:48.104379 | orchestrator | 2026-03-08 01:06:48 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:48.104412 | orchestrator | 2026-03-08 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:51.143787 | orchestrator | 2026-03-08 01:06:51 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:51.145990 | orchestrator | 2026-03-08 01:06:51 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:51.148989 | orchestrator | 2026-03-08 01:06:51 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:51.151869 | orchestrator | 2026-03-08 01:06:51 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:51.151926 | orchestrator | 2026-03-08 01:06:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:54.210527 | orchestrator | 2026-03-08 01:06:54 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:54.213185 | orchestrator | 2026-03-08 01:06:54 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:54.214768 | orchestrator | 2026-03-08 01:06:54 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:54.216071 | orchestrator | 2026-03-08 01:06:54 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:54.216109 | orchestrator | 2026-03-08 01:06:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:06:57.268548 | orchestrator | 2026-03-08 
01:06:57 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:06:57.271471 | orchestrator | 2026-03-08 01:06:57 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:06:57.272869 | orchestrator | 2026-03-08 01:06:57 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:06:57.275390 | orchestrator | 2026-03-08 01:06:57 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:06:57.275490 | orchestrator | 2026-03-08 01:06:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:00.324686 | orchestrator | 2026-03-08 01:07:00 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:00.326008 | orchestrator | 2026-03-08 01:07:00 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:00.327076 | orchestrator | 2026-03-08 01:07:00 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:00.328317 | orchestrator | 2026-03-08 01:07:00 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:00.328348 | orchestrator | 2026-03-08 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:03.375395 | orchestrator | 2026-03-08 01:07:03 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:03.377876 | orchestrator | 2026-03-08 01:07:03 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:03.380246 | orchestrator | 2026-03-08 01:07:03 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:03.382346 | orchestrator | 2026-03-08 01:07:03 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:03.382660 | orchestrator | 2026-03-08 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:06.414683 | orchestrator | 2026-03-08 01:07:06 | INFO  | Task 
a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:06.415467 | orchestrator | 2026-03-08 01:07:06 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:06.416599 | orchestrator | 2026-03-08 01:07:06 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:06.417754 | orchestrator | 2026-03-08 01:07:06 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:06.418038 | orchestrator | 2026-03-08 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:09.446529 | orchestrator | 2026-03-08 01:07:09 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:09.447247 | orchestrator | 2026-03-08 01:07:09 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:09.448039 | orchestrator | 2026-03-08 01:07:09 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:09.449064 | orchestrator | 2026-03-08 01:07:09 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:09.449183 | orchestrator | 2026-03-08 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:12.480066 | orchestrator | 2026-03-08 01:07:12 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:12.480767 | orchestrator | 2026-03-08 01:07:12 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:12.481583 | orchestrator | 2026-03-08 01:07:12 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:12.483781 | orchestrator | 2026-03-08 01:07:12 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:12.483886 | orchestrator | 2026-03-08 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:15.575205 | orchestrator | 2026-03-08 01:07:15 | INFO  | Task 
a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:15.578574 | orchestrator | 2026-03-08 01:07:15 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:15.582156 | orchestrator | 2026-03-08 01:07:15 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:15.585217 | orchestrator | 2026-03-08 01:07:15 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:15.585883 | orchestrator | 2026-03-08 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:18.622642 | orchestrator | 2026-03-08 01:07:18 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:18.623291 | orchestrator | 2026-03-08 01:07:18 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:18.624218 | orchestrator | 2026-03-08 01:07:18 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:18.625155 | orchestrator | 2026-03-08 01:07:18 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:18.625318 | orchestrator | 2026-03-08 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:21.659021 | orchestrator | 2026-03-08 01:07:21 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:21.659582 | orchestrator | 2026-03-08 01:07:21 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:21.660729 | orchestrator | 2026-03-08 01:07:21 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:21.665385 | orchestrator | 2026-03-08 01:07:21 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:21.665428 | orchestrator | 2026-03-08 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:24.713244 | orchestrator | 2026-03-08 01:07:24 | INFO  | Task 
a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:24.714854 | orchestrator | 2026-03-08 01:07:24 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:24.716889 | orchestrator | 2026-03-08 01:07:24 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:24.720214 | orchestrator | 2026-03-08 01:07:24 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:24.720264 | orchestrator | 2026-03-08 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:27.766164 | orchestrator | 2026-03-08 01:07:27 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:27.768235 | orchestrator | 2026-03-08 01:07:27 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:27.770385 | orchestrator | 2026-03-08 01:07:27 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:27.771810 | orchestrator | 2026-03-08 01:07:27 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:27.771857 | orchestrator | 2026-03-08 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:30.816590 | orchestrator | 2026-03-08 01:07:30 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:30.818375 | orchestrator | 2026-03-08 01:07:30 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:30.820696 | orchestrator | 2026-03-08 01:07:30 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:30.822859 | orchestrator | 2026-03-08 01:07:30 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:30.822933 | orchestrator | 2026-03-08 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:33.868809 | orchestrator | 2026-03-08 01:07:33 | INFO  | Task 
a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:33.871165 | orchestrator | 2026-03-08 01:07:33 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:33.874464 | orchestrator | 2026-03-08 01:07:33 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:33.876701 | orchestrator | 2026-03-08 01:07:33 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:33.876985 | orchestrator | 2026-03-08 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:36.931234 | orchestrator | 2026-03-08 01:07:36 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:36.932722 | orchestrator | 2026-03-08 01:07:36 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:36.934839 | orchestrator | 2026-03-08 01:07:36 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:36.936186 | orchestrator | 2026-03-08 01:07:36 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state STARTED 2026-03-08 01:07:36.936217 | orchestrator | 2026-03-08 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:39.976812 | orchestrator | 2026-03-08 01:07:39 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:39.976930 | orchestrator | 2026-03-08 01:07:39 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:39.977672 | orchestrator | 2026-03-08 01:07:39 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:39.979221 | orchestrator | 2026-03-08 01:07:39 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state STARTED 2026-03-08 01:07:39.984666 | orchestrator | 2026-03-08 01:07:39 | INFO  | Task 4597916f-c18d-4669-9a7f-2423ee4e283a is in state SUCCESS 2026-03-08 01:07:39.985948 | orchestrator | 2026-03-08 01:07:39.985997 | orchestrator 
| 2026-03-08 01:07:39.986006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:07:39.986038 | orchestrator | 2026-03-08 01:07:39.986046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:07:39.986053 | orchestrator | Sunday 08 March 2026 01:04:40 +0000 (0:00:00.323) 0:00:00.323 ********** 2026-03-08 01:07:39.986061 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:07:39.986068 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:07:39.986075 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:07:39.986082 | orchestrator | 2026-03-08 01:07:39.986088 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:07:39.986094 | orchestrator | Sunday 08 March 2026 01:04:40 +0000 (0:00:00.328) 0:00:00.651 ********** 2026-03-08 01:07:39.986101 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-08 01:07:39.986108 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-08 01:07:39.986115 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-08 01:07:39.986122 | orchestrator | 2026-03-08 01:07:39.986128 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-08 01:07:39.986135 | orchestrator | 2026-03-08 01:07:39.986142 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:39.986157 | orchestrator | Sunday 08 March 2026 01:04:40 +0000 (0:00:00.410) 0:00:01.061 ********** 2026-03-08 01:07:39.986163 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:39.986169 | orchestrator | 2026-03-08 01:07:39.986175 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-08 01:07:39.986181 | orchestrator | Sunday 08 
March 2026 01:04:41 +0000 (0:00:00.999) 0:00:02.060 ********** 2026-03-08 01:07:39.986188 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-08 01:07:39.986194 | orchestrator | 2026-03-08 01:07:39.986200 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-08 01:07:39.986206 | orchestrator | Sunday 08 March 2026 01:04:45 +0000 (0:00:03.712) 0:00:05.773 ********** 2026-03-08 01:07:39.986212 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-08 01:07:39.986219 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-08 01:07:39.986226 | orchestrator | 2026-03-08 01:07:39.986233 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-08 01:07:39.986240 | orchestrator | Sunday 08 March 2026 01:04:51 +0000 (0:00:06.151) 0:00:11.924 ********** 2026-03-08 01:07:39.986247 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 01:07:39.986254 | orchestrator | 2026-03-08 01:07:39.986261 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-08 01:07:39.986268 | orchestrator | Sunday 08 March 2026 01:04:54 +0000 (0:00:03.249) 0:00:15.174 ********** 2026-03-08 01:07:39.986275 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-08 01:07:39.986281 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:07:39.986288 | orchestrator | 2026-03-08 01:07:39.986295 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-08 01:07:39.986302 | orchestrator | Sunday 08 March 2026 01:04:58 +0000 (0:00:03.521) 0:00:18.695 ********** 2026-03-08 01:07:39.986309 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:07:39.986316 | orchestrator | 2026-03-08 
01:07:39.986322 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-08 01:07:39.986329 | orchestrator | Sunday 08 March 2026 01:05:01 +0000 (0:00:03.321) 0:00:22.016 ********** 2026-03-08 01:07:39.986336 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-08 01:07:39.986343 | orchestrator | 2026-03-08 01:07:39.986371 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-08 01:07:39.986383 | orchestrator | Sunday 08 March 2026 01:05:05 +0000 (0:00:03.594) 0:00:25.611 ********** 2026-03-08 01:07:39.986405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:39.986418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:39.986425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-08 01:07:39.986437 | orchestrator | 2026-03-08 01:07:39.986443 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:39.986449 | 
orchestrator | Sunday 08 March 2026 01:05:09 +0000 (0:00:04.000) 0:00:29.611 ********** 2026-03-08 01:07:39.986456 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:39.986462 | orchestrator | 2026-03-08 01:07:39.986469 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-08 01:07:39.986480 | orchestrator | Sunday 08 March 2026 01:05:10 +0000 (0:00:00.681) 0:00:30.293 ********** 2026-03-08 01:07:39.986486 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:39.986493 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:39.986499 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:39.986505 | orchestrator | 2026-03-08 01:07:39.986512 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-08 01:07:39.986518 | orchestrator | Sunday 08 March 2026 01:05:14 +0000 (0:00:04.855) 0:00:35.148 ********** 2026-03-08 01:07:39.986525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986545 | orchestrator | 2026-03-08 01:07:39.986552 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-08 01:07:39.986558 | orchestrator | Sunday 08 March 2026 01:05:17 +0000 (0:00:02.299) 0:00:37.457 ********** 2026-03-08 01:07:39.986566 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 
'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:39.986590 | orchestrator | 2026-03-08 01:07:39.986596 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-08 01:07:39.986603 | orchestrator | Sunday 08 March 2026 01:05:19 +0000 (0:00:02.185) 0:00:39.643 ********** 2026-03-08 01:07:39.986637 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:07:39.986643 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:07:39.986650 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:07:39.986657 | orchestrator | 2026-03-08 01:07:39.986663 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-08 01:07:39.986670 | orchestrator | Sunday 08 March 2026 01:05:20 +0000 (0:00:01.549) 0:00:41.193 ********** 2026-03-08 01:07:39.986677 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:39.986689 | orchestrator | 2026-03-08 01:07:39.986697 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-08 01:07:39.986704 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:00.179) 0:00:41.372 ********** 2026-03-08 01:07:39.986710 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:39.986716 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:39.986722 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:39.986728 | orchestrator | 2026-03-08 01:07:39.986735 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-08 01:07:39.986743 | orchestrator | Sunday 08 March 2026 01:05:21 +0000 (0:00:00.738) 0:00:42.110 ********** 2026-03-08 01:07:39.986750 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:39.986757 | 
2026-03-08 01:07:39.986763 | orchestrator |
TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
Sunday 08 March 2026 01:05:22 +0000 (0:00:00.527) 0:00:42.638 **********
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})

TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
Sunday 08 March 2026 01:05:27 +0000 (0:00:04.980) 0:00:47.619 **********
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-1]

TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
Sunday 08 March 2026 01:05:31 +0000 (0:00:03.768) 0:00:51.387 **********
skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-1]

TASK [glance : Creating TLS backend PEM File] **********************************
Sunday 08 March 2026 01:05:35 +0000 (0:00:04.116) 0:00:55.504 **********
skipping: [testbed-node-2]
skipping: [testbed-node-0]
skipping: [testbed-node-1]

TASK [glance : Copying over config.json files for services] ********************
Sunday 08 March 2026 01:05:39 +0000 (0:00:04.260) 0:00:59.765 **********
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})

TASK [glance : Copying over glance-api.conf] ***********************************
Sunday 08 March 2026 01:05:44 +0000 (0:00:05.237) 0:01:05.003 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [glance : Copying over glance-cache.conf for glance_api] ******************
Sunday 08 March 2026 01:05:54 +0000 (0:00:09.297) 0:01:14.302 **********
skipping: [testbed-node-2]
skipping: [testbed-node-0]
skipping: [testbed-node-1]

TASK [glance : Copying over glance-swift.conf for glance_api] ******************
Sunday 08 March 2026 01:05:58 +0000 (0:00:04.821) 0:01:19.124 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : Copying over glance-image-import.conf] **************************
Sunday 08 March 2026 01:06:04 +0000 (0:00:05.158) 0:01:24.282 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [glance : Copying over property-protections-rules.conf] *******************
Sunday 08 March 2026 01:06:09 +0000 (0:00:05.410) 0:01:29.693 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [glance : Copying over existing policy file] ******************************
Sunday 08 March 2026 01:06:12 +0000 (0:00:03.402) 0:01:33.096 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
Sunday 08 March 2026 01:06:13 +0000 (0:00:00.377) 0:01:33.474 **********
skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-2]

TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
Sunday 08 March 2026 01:06:17 +0000 (0:00:03.812) 0:01:37.287 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [glance : Check glance containers] ****************************************
Sunday 08 March 2026 01:06:21 +0000 (0:00:04.768) 0:01:42.055 **********
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})

TASK [glance : include_tasks] **************************************************
Sunday 08 March 2026 01:06:26 +0000 (0:00:04.600) 0:01:46.656 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : Creating Glance database] ***************************************
Sunday 08 March 2026 01:06:26 +0000 (0:00:00.282) 0:01:46.939 **********
changed: [testbed-node-0]

TASK [glance : Creating Glance database user and setting permissions] **********
Sunday 08 March 2026 01:06:28 +0000 (0:00:02.238) 0:01:49.177 **********
changed: [testbed-node-0]

TASK [glance : Enable log_bin_trust_function_creators function] ****************
Sunday 08 March 2026 01:06:31 +0000 (0:00:02.117) 0:01:51.295 **********
changed: [testbed-node-0]

TASK [glance : Running Glance bootstrap container] *****************************
Sunday 08 March 2026 01:06:33 +0000 (0:00:02.244) 0:01:53.540 **********
changed: [testbed-node-0]

TASK [glance : Disable log_bin_trust_function_creators function] ***************
Sunday 08 March 2026 01:07:01 +0000 (0:00:27.768) 0:02:21.308 **********
changed: [testbed-node-0]

TASK [glance : Flush handlers] *************************************************
Sunday 08 March 2026 01:07:03 +0000 (0:00:02.477) 0:02:23.786 **********

TASK [glance : Flush handlers] *************************************************
Sunday 08 March 2026 01:07:03 +0000 (0:00:00.063) 0:02:23.849 **********

TASK [glance : Flush handlers] *************************************************
Sunday 08 March 2026 01:07:03 +0000 (0:00:00.058) 0:02:23.908 **********

RUNNING HANDLER [glance : Restart glance-api container] ************************
Sunday 08 March 2026 01:07:03 +0000 (0:00:00.067) 0:02:23.976 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Sunday 08 March 2026 01:07:36 +0000 (0:00:32.991) 0:02:56.968 **********
===============================================================================
glance : Restart glance-api container ---------------------------------- 32.99s
glance : Running Glance bootstrap container ---------------------------- 27.77s
glance : Copying over glance-api.conf ----------------------------------- 9.30s
service-ks-register : glance | Creating endpoints ----------------------- 6.15s
glance : Copying over glance-image-import.conf -------------------------- 5.41s
glance : Copying over config.json files for services -------------------- 5.24s
orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.16s 2026-03-08 01:07:39.987912 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.98s 2026-03-08 01:07:39.987918 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.86s 2026-03-08 01:07:39.987926 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.82s 2026-03-08 01:07:39.987933 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.77s 2026-03-08 01:07:39.987940 | orchestrator | glance : Check glance containers ---------------------------------------- 4.60s 2026-03-08 01:07:39.987947 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.26s 2026-03-08 01:07:39.987954 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.12s 2026-03-08 01:07:39.987961 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.00s 2026-03-08 01:07:39.987968 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.81s 2026-03-08 01:07:39.987975 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.77s 2026-03-08 01:07:39.987982 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.71s 2026-03-08 01:07:39.987989 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.59s 2026-03-08 01:07:39.987997 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.52s 2026-03-08 01:07:39.988004 | orchestrator | 2026-03-08 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:43.021230 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:43.022500 | 
orchestrator | 2026-03-08 01:07:43 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state STARTED 2026-03-08 01:07:43.023024 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:43.024446 | orchestrator | 2026-03-08 01:07:43 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state STARTED 2026-03-08 01:07:43.024512 | orchestrator | 2026-03-08 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:46.071854 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:46.075129 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 85f31e5b-4889-4820-908c-e206d9d1f706 is in state SUCCESS 2026-03-08 01:07:46.077518 | orchestrator | 2026-03-08 01:07:46.077582 | orchestrator | 2026-03-08 01:07:46.077589 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:07:46.077594 | orchestrator | 2026-03-08 01:07:46.077598 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:07:46.077603 | orchestrator | Sunday 08 March 2026 01:04:43 +0000 (0:00:00.237) 0:00:00.237 ********** 2026-03-08 01:07:46.077607 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:07:46.077632 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:07:46.077642 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:07:46.077646 | orchestrator | 2026-03-08 01:07:46.077665 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-08 01:07:46.077695 | orchestrator | Sunday 08 March 2026 01:04:43 +0000 (0:00:00.305) 0:00:00.542 ********** 2026-03-08 01:07:46.077699 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-08 01:07:46.077704 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-08 01:07:46.077728 | orchestrator | ok: [testbed-node-2] => 
(item=enable_cinder_True) 2026-03-08 01:07:46.077732 | orchestrator | 2026-03-08 01:07:46.077736 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-08 01:07:46.077740 | orchestrator | 2026-03-08 01:07:46.077747 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-08 01:07:46.077753 | orchestrator | Sunday 08 March 2026 01:04:44 +0000 (0:00:00.526) 0:00:01.069 ********** 2026-03-08 01:07:46.077759 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.077766 | orchestrator | 2026-03-08 01:07:46.077785 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-08 01:07:46.077791 | orchestrator | Sunday 08 March 2026 01:04:44 +0000 (0:00:00.648) 0:00:01.718 ********** 2026-03-08 01:07:46.077797 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-08 01:07:46.077803 | orchestrator | 2026-03-08 01:07:46.077809 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-08 01:07:46.077815 | orchestrator | Sunday 08 March 2026 01:04:48 +0000 (0:00:03.263) 0:00:04.981 ********** 2026-03-08 01:07:46.077822 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-08 01:07:46.077829 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-08 01:07:46.077835 | orchestrator | 2026-03-08 01:07:46.077841 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-08 01:07:46.077848 | orchestrator | Sunday 08 March 2026 01:04:54 +0000 (0:00:06.672) 0:00:11.654 ********** 2026-03-08 01:07:46.077854 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-08 
01:07:46.077861 | orchestrator | 2026-03-08 01:07:46.077867 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-08 01:07:46.077874 | orchestrator | Sunday 08 March 2026 01:04:57 +0000 (0:00:02.925) 0:00:14.579 ********** 2026-03-08 01:07:46.077880 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-08 01:07:46.077887 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-08 01:07:46.077893 | orchestrator | 2026-03-08 01:07:46.077898 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-08 01:07:46.077904 | orchestrator | Sunday 08 March 2026 01:05:01 +0000 (0:00:03.640) 0:00:18.220 ********** 2026-03-08 01:07:46.077911 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-08 01:07:46.077918 | orchestrator | 2026-03-08 01:07:46.077924 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-08 01:07:46.077931 | orchestrator | Sunday 08 March 2026 01:05:04 +0000 (0:00:03.400) 0:00:21.621 ********** 2026-03-08 01:07:46.077937 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-08 01:07:46.077943 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-08 01:07:46.077949 | orchestrator | 2026-03-08 01:07:46.077955 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-08 01:07:46.077962 | orchestrator | Sunday 08 March 2026 01:05:12 +0000 (0:00:07.676) 0:00:29.297 ********** 2026-03-08 01:07:46.077970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.077997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.078050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.078100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 
01:07:46.078176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078186 | orchestrator | 2026-03-08 01:07:46.078193 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-08 01:07:46.078199 | orchestrator | Sunday 08 March 2026 01:05:15 +0000 (0:00:02.656) 0:00:31.953 ********** 2026-03-08 01:07:46.078206 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.078212 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.078218 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.078224 | orchestrator | 2026-03-08 01:07:46.078231 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-08 01:07:46.078237 | orchestrator | Sunday 08 March 2026 01:05:15 +0000 (0:00:00.517) 0:00:32.471 ********** 2026-03-08 01:07:46.078244 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.078250 | orchestrator | 2026-03-08 01:07:46.078257 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-08 01:07:46.078263 | orchestrator | Sunday 08 March 2026 01:05:17 +0000 (0:00:01.685) 0:00:34.156 ********** 2026-03-08 01:07:46.078274 | orchestrator | changed: [testbed-node-0] 
=> (item=cinder-volume) 2026-03-08 01:07:46.078282 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-08 01:07:46.078288 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-08 01:07:46.078294 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-08 01:07:46.078301 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-08 01:07:46.078307 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-08 01:07:46.078314 | orchestrator | 2026-03-08 01:07:46.078321 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-08 01:07:46.078328 | orchestrator | Sunday 08 March 2026 01:05:20 +0000 (0:00:03.413) 0:00:37.570 ********** 2026-03-08 01:07:46.078340 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078362 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078369 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078384 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078395 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078403 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-08 01:07:46.078408 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078415 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078423 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078432 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078436 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078443 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-08 01:07:46.078447 | orchestrator | 2026-03-08 01:07:46.078450 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-08 01:07:46.078454 | orchestrator | Sunday 08 March 2026 01:05:25 +0000 (0:00:04.258) 0:00:41.829 ********** 2026-03-08 01:07:46.078458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.078466 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.078470 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-08 01:07:46.078474 | orchestrator | 2026-03-08 01:07:46.078478 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-08 01:07:46.078481 | orchestrator | Sunday 08 March 2026 01:05:26 +0000 (0:00:01.931) 0:00:43.760 ********** 2026-03-08 01:07:46.078485 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-08 01:07:46.078489 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-08 01:07:46.078493 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-08 01:07:46.078497 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 
2026-03-08 01:07:46.078501 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 01:07:46.078504 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-08 01:07:46.078508 | orchestrator | 2026-03-08 01:07:46.078512 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-08 01:07:46.078518 | orchestrator | Sunday 08 March 2026 01:05:30 +0000 (0:00:03.178) 0:00:46.939 ********** 2026-03-08 01:07:46.078525 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-08 01:07:46.078530 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-08 01:07:46.078539 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-08 01:07:46.078548 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-08 01:07:46.078553 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-08 01:07:46.078558 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-08 01:07:46.078564 | orchestrator | 2026-03-08 01:07:46.078570 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-08 01:07:46.078575 | orchestrator | Sunday 08 March 2026 01:05:31 +0000 (0:00:01.656) 0:00:48.595 ********** 2026-03-08 01:07:46.078582 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.078587 | orchestrator | 2026-03-08 01:07:46.078593 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-08 01:07:46.078600 | orchestrator | Sunday 08 March 2026 01:05:32 +0000 (0:00:00.246) 0:00:48.842 ********** 2026-03-08 01:07:46.078606 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.078612 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.078617 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.078623 | orchestrator | 2026-03-08 01:07:46.078629 | orchestrator | TASK 
[cinder : include_tasks] ************************************************** 2026-03-08 01:07:46.078635 | orchestrator | Sunday 08 March 2026 01:05:32 +0000 (0:00:00.546) 0:00:49.388 ********** 2026-03-08 01:07:46.078641 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:07:46.078647 | orchestrator | 2026-03-08 01:07:46.078671 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-08 01:07:46.078683 | orchestrator | Sunday 08 March 2026 01:05:33 +0000 (0:00:00.893) 0:00:50.282 ********** 2026-03-08 01:07:46.078690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.078709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.078714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.078718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-08 01:07:46.078722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2026-03-08 01:07:46.078747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.078774 | orchestrator | 2026-03-08 01:07:46.078778 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-08 01:07:46.078782 | orchestrator | Sunday 08 March 2026 01:05:38 +0000 (0:00:04.737) 0:00:55.020 ********** 2026-03-08 
01:07:46.078788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.078792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078805 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.078813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-08 01:07:46.078820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078835 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.078839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.078843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078863 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.078868 | orchestrator | 2026-03-08 01:07:46.078872 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-08 01:07:46.078876 | orchestrator | Sunday 08 March 2026 01:05:39 +0000 (0:00:01.268) 0:00:56.289 ********** 2026-03-08 01:07:46.078880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.078884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078903 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.078912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.078919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078936 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.078945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.078963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.078986 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.078991 | orchestrator | 2026-03-08 01:07:46.078998 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-08 01:07:46.079004 | orchestrator | Sunday 08 March 2026 01:05:41 +0000 (0:00:01.973) 0:00:58.262 ********** 2026-03-08 01:07:46.079009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079041 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:07:46.079108 | orchestrator |
2026-03-08 01:07:46.079113 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-08 01:07:46.079119 | orchestrator | Sunday 08 March 2026 01:05:46 +0000 (0:00:05.410) 0:01:03.673 **********
2026-03-08 01:07:46.079126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:07:46.079133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:07:46.079144 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-08 01:07:46.079149 | orchestrator |
2026-03-08 01:07:46.079155 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-08 01:07:46.079161 | orchestrator | Sunday 08 March 2026 01:05:50 +0000 (0:00:03.285) 0:01:06.958 **********
2026-03-08 01:07:46.079171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.079193 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.079860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:07:46.079877 | orchestrator |
2026-03-08 01:07:46.079885 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-08 01:07:46.079901 | orchestrator | Sunday 08 March 2026 01:06:07 +0000 (0:00:16.853) 0:01:23.812 **********
2026-03-08 01:07:46.079909 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:07:46.079917 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:07:46.079922 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:07:46.079927 | orchestrator |
2026-03-08 01:07:46.079933 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-08 01:07:46.079950 | orchestrator | Sunday 08 March 2026 01:06:09 +0000 (0:00:02.245) 0:01:26.057 **********
2026-03-08 01:07:46.079957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-08 01:07:46.079969 | orchestrator
| skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.079975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.079988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.079994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080012 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.080018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080034 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.080041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-08 01:07:46.080052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-08 01:07:46.080068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-08 01:07:46.080072 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:07:46.080076 | orchestrator |
2026-03-08 01:07:46.080080 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-08 01:07:46.080084 | orchestrator | Sunday 08 March 2026 01:06:09 +0000 (0:00:00.298) 0:01:26.709 **********
2026-03-08 01:07:46.080088 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:07:46.080091 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:07:46.080095 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:07:46.080099 | orchestrator |
2026-03-08 01:07:46.080103 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-08 01:07:46.080107 | orchestrator | Sunday 08 March 2026 01:06:10 +0000 (0:00:00.651) 0:01:27.007 **********
2026-03-08 01:07:46.080113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.080122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.080126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-08 01:07:46.080135 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-08 01:07:46.080186 | orchestrator | 2026-03-08 01:07:46.080192 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-08 01:07:46.080198 | orchestrator | Sunday 08 March 2026 01:06:13 +0000 (0:00:03.463) 0:01:30.471 ********** 2026-03-08 01:07:46.080207 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.080215 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:07:46.080224 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:07:46.080229 | orchestrator | 2026-03-08 01:07:46.080235 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-08 01:07:46.080240 | orchestrator | Sunday 08 March 2026 01:06:14 +0000 (0:00:00.976) 0:01:31.447 ********** 2026-03-08 01:07:46.080247 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080253 | orchestrator | 2026-03-08 01:07:46.080258 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-08 01:07:46.080265 | orchestrator | Sunday 08 March 2026 01:06:17 +0000 (0:00:02.387) 0:01:33.835 ********** 2026-03-08 01:07:46.080271 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080277 | orchestrator | 2026-03-08 01:07:46.080283 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-08 01:07:46.080289 | orchestrator | Sunday 08 March 2026 01:06:19 +0000 (0:00:02.633) 0:01:36.468 ********** 2026-03-08 01:07:46.080294 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080300 | orchestrator | 2026-03-08 01:07:46.080307 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-08 01:07:46.080312 
| orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:20.482) 0:01:56.951 ********** 2026-03-08 01:07:46.080318 | orchestrator | 2026-03-08 01:07:46.080324 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-08 01:07:46.080331 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:00.072) 0:01:57.024 ********** 2026-03-08 01:07:46.080337 | orchestrator | 2026-03-08 01:07:46.080345 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-08 01:07:46.080351 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:00.071) 0:01:57.095 ********** 2026-03-08 01:07:46.080357 | orchestrator | 2026-03-08 01:07:46.080364 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-08 01:07:46.080370 | orchestrator | Sunday 08 March 2026 01:06:40 +0000 (0:00:00.070) 0:01:57.165 ********** 2026-03-08 01:07:46.080376 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080383 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.080389 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.080395 | orchestrator | 2026-03-08 01:07:46.080402 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-08 01:07:46.080408 | orchestrator | Sunday 08 March 2026 01:07:04 +0000 (0:00:24.187) 0:02:21.353 ********** 2026-03-08 01:07:46.080416 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080423 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.080430 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.080437 | orchestrator | 2026-03-08 01:07:46.080442 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-08 01:07:46.080448 | orchestrator | Sunday 08 March 2026 01:07:11 +0000 (0:00:07.058) 0:02:28.411 ********** 2026-03-08 01:07:46.080453 | orchestrator | changed: 
[testbed-node-0] 2026-03-08 01:07:46.080460 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.080466 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.080471 | orchestrator | 2026-03-08 01:07:46.080477 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-08 01:07:46.080483 | orchestrator | Sunday 08 March 2026 01:07:33 +0000 (0:00:22.066) 0:02:50.478 ********** 2026-03-08 01:07:46.080494 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:07:46.080500 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:07:46.080505 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:07:46.080511 | orchestrator | 2026-03-08 01:07:46.080517 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-08 01:07:46.080527 | orchestrator | Sunday 08 March 2026 01:07:44 +0000 (0:00:10.721) 0:03:01.199 ********** 2026-03-08 01:07:46.080535 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:07:46.080541 | orchestrator | 2026-03-08 01:07:46.080546 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:07:46.080554 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-08 01:07:46.080561 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:07:46.080568 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:07:46.080573 | orchestrator | 2026-03-08 01:07:46.080581 | orchestrator | 2026-03-08 01:07:46.080586 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:07:46.080593 | orchestrator | Sunday 08 March 2026 01:07:44 +0000 (0:00:00.399) 0:03:01.599 ********** 2026-03-08 01:07:46.080600 | orchestrator | 
=============================================================================== 2026-03-08 01:07:46.080605 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.19s 2026-03-08 01:07:46.080611 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.07s 2026-03-08 01:07:46.080622 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.48s 2026-03-08 01:07:46.080627 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.85s 2026-03-08 01:07:46.080636 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.72s 2026-03-08 01:07:46.080643 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.68s 2026-03-08 01:07:46.080668 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.06s 2026-03-08 01:07:46.080675 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.67s 2026-03-08 01:07:46.080681 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.41s 2026-03-08 01:07:46.080687 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.74s 2026-03-08 01:07:46.080693 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.26s 2026-03-08 01:07:46.080699 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.64s 2026-03-08 01:07:46.080706 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.46s 2026-03-08 01:07:46.080713 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.41s 2026-03-08 01:07:46.080718 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.40s 2026-03-08 01:07:46.080726 | orchestrator | cinder : 
Copying over cinder-wsgi.conf ---------------------------------- 3.29s 2026-03-08 01:07:46.080734 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.26s 2026-03-08 01:07:46.080740 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.18s 2026-03-08 01:07:46.080746 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.93s 2026-03-08 01:07:46.080752 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.66s 2026-03-08 01:07:46.080758 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:46.080765 | orchestrator | 2026-03-08 01:07:46 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state STARTED 2026-03-08 01:07:46.080800 | orchestrator | 2026-03-08 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:49.127522 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:49.128919 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:49.132031 | orchestrator | 2026-03-08 01:07:49 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state STARTED 2026-03-08 01:07:49.132153 | orchestrator | 2026-03-08 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:07:52.168139 | orchestrator | 2026-03-08 01:07:52 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:07:52.168725 | orchestrator | 2026-03-08 01:07:52 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:07:52.170612 | orchestrator | 2026-03-08 01:07:52 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state STARTED 2026-03-08 01:07:52.170660 | orchestrator | 2026-03-08 01:07:52 | INFO  | Wait 1 second(s) until the next check 
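The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines come from a simple poll-until-done loop in the deploy tooling. A minimal sketch of that pattern (hypothetical helper names, not the actual OSISM client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll each task's state until none remains STARTED, mirroring the
    log output above. `get_state` is an injected callable mapping a task
    id to its current state string (e.g. STARTED, SUCCESS, FAILURE)."""
    pending = set(task_ids)
    for _ in range(max_checks):
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state != "STARTED":
                # Task finished (SUCCESS or FAILURE); stop polling it.
                pending.discard(tid)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False  # gave up before all tasks finished
```

Note that each pass polls every still-pending task before sleeping, which is why the log shows one status line per task id per cycle.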
[... repeated polling cycles elided: tasks a616cf87-0e11-4eb2-b8e5-9c2348dde5f0, 760542ba-76b5-4179-b658-6ad67af063bf and 587947ef-f4c8-46a7-981b-d61b006420d0 remain in state STARTED, checked every ~3 seconds from 01:07:55 through 01:09:41 ...] 2026-03-08 01:09:45.029486 | orchestrator | 2026-03-08 01:09:45 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:09:45.029549 | orchestrator | 2026-03-08 01:09:45 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:09:45.029557 | orchestrator | 2026-03-08 01:09:45 | INFO  | Task 587947ef-f4c8-46a7-981b-d61b006420d0 is in state SUCCESS 2026-03-08 01:09:45.032562 | orchestrator | 2026-03-08 01:09:45.032598 | orchestrator | 2026-03-08 01:09:45.032606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-08 01:09:45.032623 | orchestrator | 2026-03-08 01:09:45.032630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-08 01:09:45.032636 | orchestrator | Sunday 08 March 2026 01:07:41 +0000 (0:00:00.263) 0:00:00.263 ********** 2026-03-08 01:09:45.032642 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:09:45.032649 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:09:45.032656 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:09:45.032665 | orchestrator | 2026-03-08 01:09:45.032671 | orchestrator | TASK [Group hosts based on
enabled services] *********************************** 2026-03-08 01:09:45.032678 | orchestrator | Sunday 08 March 2026 01:07:41 +0000 (0:00:00.297) 0:00:00.561 ********** 2026-03-08 01:09:45.032685 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-08 01:09:45.032692 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-08 01:09:45.032699 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-08 01:09:45.032706 | orchestrator | 2026-03-08 01:09:45.032712 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-08 01:09:45.032718 | orchestrator | 2026-03-08 01:09:45.032725 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-08 01:09:45.032731 | orchestrator | Sunday 08 March 2026 01:07:42 +0000 (0:00:00.472) 0:00:01.033 ********** 2026-03-08 01:09:45.032737 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:09:45.032743 | orchestrator | 2026-03-08 01:09:45.032754 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-08 01:09:45.032761 | orchestrator | Sunday 08 March 2026 01:07:43 +0000 (0:00:00.707) 0:00:01.741 ********** 2026-03-08 01:09:45.032769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032802 | orchestrator | 2026-03-08 01:09:45.032809 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-08 01:09:45.032815 | orchestrator | Sunday 08 March 2026 01:07:43 +0000 (0:00:00.872) 0:00:02.613 ********** 2026-03-08 01:09:45.032821 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-08 01:09:45.032828 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-08 01:09:45.032833 | orchestrator | 
ok: [testbed-node-0 -> localhost] 2026-03-08 01:09:45.032839 | orchestrator | 2026-03-08 01:09:45.032845 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-08 01:09:45.032851 | orchestrator | Sunday 08 March 2026 01:07:45 +0000 (0:00:01.409) 0:00:04.023 ********** 2026-03-08 01:09:45.032857 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:09:45.032863 | orchestrator | 2026-03-08 01:09:45.032870 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-08 01:09:45.032876 | orchestrator | Sunday 08 March 2026 01:07:46 +0000 (0:00:00.731) 0:00:04.754 ********** 2026-03-08 01:09:45.032897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.032926 | orchestrator | 2026-03-08 01:09:45.032932 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-08 01:09:45.032938 | orchestrator | Sunday 08 March 2026 01:07:47 +0000 (0:00:01.552) 0:00:06.307 ********** 2026-03-08 01:09:45.032944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.032950 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.032957 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.032964 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.032984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.033000 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.033007 | orchestrator | 2026-03-08 01:09:45.033021 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-08 01:09:45.033027 | orchestrator | Sunday 08 March 2026 01:07:48 +0000 (0:00:00.408) 0:00:06.716 ********** 2026-03-08 01:09:45.033034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.033048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.033055 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.033061 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.033067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-08 01:09:45.033074 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.033080 | orchestrator | 2026-03-08 01:09:45.033087 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-08 01:09:45.033092 | orchestrator | Sunday 08 March 2026 01:07:48 +0000 (0:00:00.802) 0:00:07.518 ********** 2026-03-08 01:09:45.033099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033129 | orchestrator | 2026-03-08 01:09:45.033136 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-08 01:09:45.033143 | orchestrator | Sunday 08 March 2026 01:07:50 +0000 (0:00:01.455) 0:00:08.974 ********** 2026-03-08 01:09:45.033152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.033175 | orchestrator | 2026-03-08 01:09:45.033182 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-08 01:09:45.033189 | orchestrator | Sunday 08 March 2026 01:07:51 +0000 (0:00:01.429) 0:00:10.404 ********** 2026-03-08 01:09:45.033196 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.033203 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.033208 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.033215 | orchestrator | 2026-03-08 01:09:45.033221 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-08 01:09:45.033228 | orchestrator | Sunday 08 March 2026 01:07:52 +0000 (0:00:00.406) 0:00:10.810 ********** 2026-03-08 01:09:45.033235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:09:45.033242 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:09:45.033250 
| orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-08 01:09:45.033257 | orchestrator | 2026-03-08 01:09:45.033264 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-08 01:09:45.033271 | orchestrator | Sunday 08 March 2026 01:07:53 +0000 (0:00:01.314) 0:00:12.125 ********** 2026-03-08 01:09:45.033279 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:09:45.033286 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:09:45.033298 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-08 01:09:45.033305 | orchestrator | 2026-03-08 01:09:45.033313 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-08 01:09:45.033320 | orchestrator | Sunday 08 March 2026 01:07:54 +0000 (0:00:01.146) 0:00:13.271 ********** 2026-03-08 01:09:45.033332 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-08 01:09:45.033339 | orchestrator | 2026-03-08 01:09:45.033347 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-08 01:09:45.033354 | orchestrator | Sunday 08 March 2026 01:07:55 +0000 (0:00:00.813) 0:00:14.085 ********** 2026-03-08 01:09:45.033361 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-08 01:09:45.033368 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-08 01:09:45.033375 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:09:45.033383 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:09:45.033390 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:09:45.033398 | orchestrator | 2026-03-08 
01:09:45.033405 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-08 01:09:45.033426 | orchestrator | Sunday 08 March 2026 01:07:56 +0000 (0:00:00.679) 0:00:14.765 ********** 2026-03-08 01:09:45.033432 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.033439 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.033445 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.033452 | orchestrator | 2026-03-08 01:09:45.033459 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-08 01:09:45.033466 | orchestrator | Sunday 08 March 2026 01:07:56 +0000 (0:00:00.544) 0:00:15.309 ********** 2026-03-08 01:09:45.033482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1870496, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1197853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1870496, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1197853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1870496, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1197853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1870503, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1237853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1870503, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1237853, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1870503, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1237853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1870515, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1297853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1870515, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1297853, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1870515, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1297853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1870501, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1225908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1870501, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1225908, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1870501, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1225908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1870516, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1307855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1870516, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 
1772928147.0, 'ctime': 1772930262.1307855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1870516, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1307855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1870498, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1207852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 
'inode': 1870498, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1207852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1870498, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1207852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1870507, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1257854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.033666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1870507, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1257854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-08 01:09:45 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=each of the following files under /operations/grafana/dashboards/; all regular files, mode 0644, owner root:root, dev 102, atime/mtime 1772928147.0)
2026-03-08 01:09:45 | orchestrator |   ceph/osd-device-details.json        26346 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/radosgw-overview.json          46110 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/README.md                         84 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/ceph-cluster.json              34113 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/cephfs-overview.json            9025 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/pool-detail.json               19231 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/rbd-details.json               13320 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/ceph_overview.json             80386 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/radosgw-detail.json            20042 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/smb-overview.json              29877 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/osds-overview.json             38375 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/multi-cluster-overview.json    63043 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/hosts-overview.json            27387 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/pool-overview.json             49016 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/host-details.json              43303 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/radosgw-sync-overview.json     16614 bytes
2026-03-08 01:09:45 | orchestrator |   ceph/ceph-nvmeof.json               52667 bytes
2026-03-08 01:09:45 | orchestrator |   openstack/openstack.json            57270 bytes
2026-03-08 01:09:45 | orchestrator |   infrastructure/haproxy.json        410814 bytes
2026-03-08 01:09:45 | orchestrator |   infrastructure/database.json        30898 bytes
2026-03-08 01:09:45 | orchestrator |   infrastructure/node-rsrc-use.json   15767 bytes
2026-03-08 01:09:45.034844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1870519, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1326375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1870519, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1326375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1870519, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1326375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 
01:09:45.034874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1870533, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1870533, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1870530, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1507857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1870533, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1870530, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1507857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1870534, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1870534, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1870530, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1507857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1870538, 'dev': 102, 
'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1870538, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1870534, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1537857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1870532, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1527858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1870538, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.034997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1870532, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1527858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1870527, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1427855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1870532, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1527858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1870527, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1427855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1870524, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1367855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1870527, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1427855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1870524, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1367855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1870526, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1417856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1870524, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1367855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1870526, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1417856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035101 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1870523, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1357856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1870526, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1417856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1870523, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1357856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 
01:09:45.035123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1870528, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1437857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1870523, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1357856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1870528, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1437857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1870537, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1870528, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1437857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1870537, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1870536, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1557858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1870537, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1587858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 
1870536, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1557858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1870520, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1328976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1870536, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1557858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1870520, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1328976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1870521, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1337855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1870520, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1328976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1870521, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1337855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1870531, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1517859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1870521, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1337855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1870531, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1517859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1870535, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1547859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1870531, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1517859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-08 01:09:45.035305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1870535, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1547859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1870535, 'dev': 102, 'nlink': 1, 'atime': 1772928147.0, 'mtime': 1772928147.0, 'ctime': 1772930262.1547859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-08 01:09:45.035323 | orchestrator | 2026-03-08 01:09:45.035332 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-08 01:09:45.035339 | orchestrator | Sunday 08 March 2026 01:08:39 +0000 (0:00:42.662) 0:00:57.972 ********** 2026-03-08 01:09:45.035350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.035358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.035366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-08 01:09:45.035374 | orchestrator | 2026-03-08 01:09:45.035382 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-08 01:09:45.035390 | orchestrator | Sunday 08 
March 2026 01:08:40 +0000 (0:00:01.032) 0:00:59.005 ********** 2026-03-08 01:09:45.035402 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:09:45.035426 | orchestrator | 2026-03-08 01:09:45.035433 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-08 01:09:45.035440 | orchestrator | Sunday 08 March 2026 01:08:42 +0000 (0:00:02.467) 0:01:01.472 ********** 2026-03-08 01:09:45.035446 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:09:45.035454 | orchestrator | 2026-03-08 01:09:45.035462 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:09:45.035469 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:02.584) 0:01:04.056 ********** 2026-03-08 01:09:45.035476 | orchestrator | 2026-03-08 01:09:45.035483 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:09:45.035491 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.080) 0:01:04.137 ********** 2026-03-08 01:09:45.035499 | orchestrator | 2026-03-08 01:09:45.035506 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-08 01:09:45.035514 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.245) 0:01:04.383 ********** 2026-03-08 01:09:45.035521 | orchestrator | 2026-03-08 01:09:45.035529 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-08 01:09:45.035536 | orchestrator | Sunday 08 March 2026 01:08:45 +0000 (0:00:00.066) 0:01:04.450 ********** 2026-03-08 01:09:45.035543 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.035551 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.035559 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:09:45.035566 | orchestrator | 2026-03-08 01:09:45.035574 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first 
node] ********* 2026-03-08 01:09:45.035581 | orchestrator | Sunday 08 March 2026 01:08:52 +0000 (0:00:06.962) 0:01:11.412 ********** 2026-03-08 01:09:45.035589 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.035596 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.035604 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-08 01:09:45.035612 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:09:45.035620 | orchestrator | 2026-03-08 01:09:45.035627 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-08 01:09:45.035637 | orchestrator | Sunday 08 March 2026 01:09:07 +0000 (0:00:14.605) 0:01:26.018 ********** 2026-03-08 01:09:45.035644 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.035650 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:09:45.035657 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:09:45.035664 | orchestrator | 2026-03-08 01:09:45.035671 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-08 01:09:45.035677 | orchestrator | Sunday 08 March 2026 01:09:37 +0000 (0:00:30.039) 0:01:56.057 ********** 2026-03-08 01:09:45.035684 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:09:45.035692 | orchestrator | 2026-03-08 01:09:45.035697 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-08 01:09:45.035708 | orchestrator | Sunday 08 March 2026 01:09:39 +0000 (0:00:02.180) 0:01:58.237 ********** 2026-03-08 01:09:45.035714 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.035721 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:09:45.035728 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:09:45.035736 | orchestrator | 2026-03-08 01:09:45.035743 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 
2026-03-08 01:09:45.035749 | orchestrator | Sunday 08 March 2026 01:09:40 +0000 (0:00:00.520) 0:01:58.758 ********** 2026-03-08 01:09:45.035757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-08 01:09:45.035766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-08 01:09:45.035778 | orchestrator | 2026-03-08 01:09:45.035784 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-08 01:09:45.035790 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:02.224) 0:02:00.982 ********** 2026-03-08 01:09:45.035796 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:09:45.035802 | orchestrator | 2026-03-08 01:09:45.035808 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:09:45.035815 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:09:45.035822 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:09:45.035829 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:09:45.035835 | orchestrator | 2026-03-08 01:09:45.035840 | orchestrator | 2026-03-08 01:09:45.035846 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 
01:09:45.035852 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:00.269) 0:02:01.252 ********** 2026-03-08 01:09:45.035858 | orchestrator | =============================================================================== 2026-03-08 01:09:45.035865 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 42.66s 2026-03-08 01:09:45.035871 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.04s 2026-03-08 01:09:45.035876 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 14.61s 2026-03-08 01:09:45.035882 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.96s 2026-03-08 01:09:45.035889 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.58s 2026-03-08 01:09:45.035895 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s 2026-03-08 01:09:45.035902 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.22s 2026-03-08 01:09:45.035909 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s 2026-03-08 01:09:45.035916 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.55s 2026-03-08 01:09:45.035923 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.46s 2026-03-08 01:09:45.035930 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s 2026-03-08 01:09:45.035936 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.41s 2026-03-08 01:09:45.035943 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.31s 2026-03-08 01:09:45.035949 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.15s 2026-03-08 01:09:45.035956 | 
orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2026-03-08 01:09:45.035963 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.87s 2026-03-08 01:09:45.035970 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s 2026-03-08 01:09:45.035977 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s 2026-03-08 01:09:45.035984 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2026-03-08 01:09:45.035991 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s 2026-03-08 01:09:45.036001 | orchestrator | 2026-03-08 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:09:48.079926 | orchestrator | 2026-03-08 01:09:48 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:09:48.083357 | orchestrator | 2026-03-08 01:09:48 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:09:48.083426 | orchestrator | 2026-03-08 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:09:51.132197 | orchestrator | 2026-03-08 01:09:51 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:09:51.134258 | orchestrator | 2026-03-08 01:09:51 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:09:51.134319 | orchestrator | 2026-03-08 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:09:54.184191 | orchestrator | 2026-03-08 01:09:54 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:09:54.185458 | orchestrator | 2026-03-08 01:09:54 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:09:54.185528 | orchestrator | 2026-03-08 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-03-08 
01:09:57.235219 | orchestrator | 2026-03-08 01:09:57 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:09:57.238559 | orchestrator | 2026-03-08 01:09:57 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:09:57.238648 | orchestrator | 2026-03-08 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:00.280866 | orchestrator | 2026-03-08 01:10:00 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:10:00.281118 | orchestrator | 2026-03-08 01:10:00 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:00.281169 | orchestrator | 2026-03-08 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:03.321856 | orchestrator | 2026-03-08 01:10:03 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:10:03.324002 | orchestrator | 2026-03-08 01:10:03 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:03.324181 | orchestrator | 2026-03-08 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:06.366841 | orchestrator | 2026-03-08 01:10:06 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state STARTED 2026-03-08 01:10:06.368715 | orchestrator | 2026-03-08 01:10:06 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:06.368755 | orchestrator | 2026-03-08 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:09.409000 | orchestrator | 2026-03-08 01:10:09 | INFO  | Task a616cf87-0e11-4eb2-b8e5-9c2348dde5f0 is in state SUCCESS 2026-03-08 01:10:09.411669 | orchestrator | 2026-03-08 01:10:09 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:09.411725 | orchestrator | 2026-03-08 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:12.479697 | orchestrator | 2026-03-08 01:10:12 | INFO  | Task 
760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:12.481433 | orchestrator | 2026-03-08 01:10:12 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:12.481506 | orchestrator | 2026-03-08 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:15.531419 | orchestrator | 2026-03-08 01:10:15 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:15.532215 | orchestrator | 2026-03-08 01:10:15 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:15.532249 | orchestrator | 2026-03-08 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:18.574653 | orchestrator | 2026-03-08 01:10:18 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:18.576702 | orchestrator | 2026-03-08 01:10:18 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:18.576747 | orchestrator | 2026-03-08 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:21.618981 | orchestrator | 2026-03-08 01:10:21 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:21.621531 | orchestrator | 2026-03-08 01:10:21 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:21.621590 | orchestrator | 2026-03-08 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:24.667679 | orchestrator | 2026-03-08 01:10:24 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:24.668298 | orchestrator | 2026-03-08 01:10:24 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:24.668714 | orchestrator | 2026-03-08 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:27.722802 | orchestrator | 2026-03-08 01:10:27 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 
01:10:27.723427 | orchestrator | 2026-03-08 01:10:27 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:27.723483 | orchestrator | 2026-03-08 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:30.771052 | orchestrator | 2026-03-08 01:10:30 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:30.773272 | orchestrator | 2026-03-08 01:10:30 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:30.773334 | orchestrator | 2026-03-08 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:33.809914 | orchestrator | 2026-03-08 01:10:33 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:33.810279 | orchestrator | 2026-03-08 01:10:33 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:33.810295 | orchestrator | 2026-03-08 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:36.861077 | orchestrator | 2026-03-08 01:10:36 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:36.862840 | orchestrator | 2026-03-08 01:10:36 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:36.862899 | orchestrator | 2026-03-08 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:39.906198 | orchestrator | 2026-03-08 01:10:39 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:39.907824 | orchestrator | 2026-03-08 01:10:39 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:39.907907 | orchestrator | 2026-03-08 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:42.948393 | orchestrator | 2026-03-08 01:10:42 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:42.948464 | orchestrator | 2026-03-08 01:10:42 | INFO  | Task 
65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:42.948473 | orchestrator | 2026-03-08 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:45.986329 | orchestrator | 2026-03-08 01:10:45 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:45.989558 | orchestrator | 2026-03-08 01:10:45 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:45.989616 | orchestrator | 2026-03-08 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:49.038782 | orchestrator | 2026-03-08 01:10:49 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:49.040382 | orchestrator | 2026-03-08 01:10:49 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:49.040431 | orchestrator | 2026-03-08 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:52.096824 | orchestrator | 2026-03-08 01:10:52 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:52.098948 | orchestrator | 2026-03-08 01:10:52 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:52.099203 | orchestrator | 2026-03-08 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:55.144311 | orchestrator | 2026-03-08 01:10:55 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:55.145505 | orchestrator | 2026-03-08 01:10:55 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:10:55.145570 | orchestrator | 2026-03-08 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:10:58.191396 | orchestrator | 2026-03-08 01:10:58 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:10:58.193740 | orchestrator | 2026-03-08 01:10:58 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 
01:10:58.193876 | orchestrator | 2026-03-08 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:11:01.239067 | orchestrator | 2026-03-08 01:11:01 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:11:01.239660 | orchestrator | 2026-03-08 01:11:01 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:11:01.239680 | orchestrator | 2026-03-08 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:11:04.283538 | orchestrator | 2026-03-08 01:11:04 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:04.392007 | orchestrator | 2026-03-08 01:13:04 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:04.392113 | orchestrator | 2026-03-08 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:07.424088 | orchestrator | 2026-03-08 01:13:07 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:07.424666 | orchestrator | 2026-03-08 01:13:07 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:07.424704 | orchestrator | 2026-03-08 01:13:07 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:10.460872 | orchestrator | 2026-03-08 01:13:10 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:10.461626 | orchestrator | 2026-03-08 01:13:10 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:10.461657 | orchestrator | 2026-03-08 01:13:10 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:13.519769 | orchestrator | 2026-03-08 01:13:13 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:13.525679 | orchestrator | 2026-03-08 01:13:13 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:13.526212 | orchestrator | 2026-03-08 01:13:13 | INFO  | Wait 1 second(s) 
until the next check 2026-03-08 01:13:16.575286 | orchestrator | 2026-03-08 01:13:16 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:16.576792 | orchestrator | 2026-03-08 01:13:16 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:16.577026 | orchestrator | 2026-03-08 01:13:16 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:19.625600 | orchestrator | 2026-03-08 01:13:19 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:19.627168 | orchestrator | 2026-03-08 01:13:19 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:19.627210 | orchestrator | 2026-03-08 01:13:19 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:22.673276 | orchestrator | 2026-03-08 01:13:22 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:22.676307 | orchestrator | 2026-03-08 01:13:22 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:22.676494 | orchestrator | 2026-03-08 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:25.719283 | orchestrator | 2026-03-08 01:13:25 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:25.719388 | orchestrator | 2026-03-08 01:13:25 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:25.719397 | orchestrator | 2026-03-08 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:28.765452 | orchestrator | 2026-03-08 01:13:28 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:28.766785 | orchestrator | 2026-03-08 01:13:28 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:28.766814 | orchestrator | 2026-03-08 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:31.792566 | orchestrator | 2026-03-08 
01:13:31 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:31.793118 | orchestrator | 2026-03-08 01:13:31 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:31.793155 | orchestrator | 2026-03-08 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:34.825701 | orchestrator | 2026-03-08 01:13:34 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:34.827112 | orchestrator | 2026-03-08 01:13:34 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:34.827166 | orchestrator | 2026-03-08 01:13:34 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:37.871015 | orchestrator | 2026-03-08 01:13:37 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:37.874266 | orchestrator | 2026-03-08 01:13:37 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:37.874358 | orchestrator | 2026-03-08 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:40.916903 | orchestrator | 2026-03-08 01:13:40 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:40.917265 | orchestrator | 2026-03-08 01:13:40 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:40.917290 | orchestrator | 2026-03-08 01:13:40 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:43.974466 | orchestrator | 2026-03-08 01:13:43 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED 2026-03-08 01:13:43.976234 | orchestrator | 2026-03-08 01:13:43 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED 2026-03-08 01:13:43.976255 | orchestrator | 2026-03-08 01:13:43 | INFO  | Wait 1 second(s) until the next check 2026-03-08 01:13:47.050001 | orchestrator | 2026-03-08 01:13:47 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state 
STARTED
2026-03-08 01:13:47.051695 | orchestrator | 2026-03-08 01:13:47 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:13:47.051727 | orchestrator | 2026-03-08 01:13:47 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:13:50.093297 | orchestrator | 2026-03-08 01:13:50 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:13:50.094053 | orchestrator | 2026-03-08 01:13:50 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:13:50.094087 | orchestrator | 2026-03-08 01:13:50 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:13:53.136801 | orchestrator | 2026-03-08 01:13:53 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:13:53.137603 | orchestrator | 2026-03-08 01:13:53 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:13:53.137662 | orchestrator | 2026-03-08 01:13:53 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:13:56.176158 | orchestrator | 2026-03-08 01:13:56 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:13:56.177120 | orchestrator | 2026-03-08 01:13:56 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:13:56.177204 | orchestrator | 2026-03-08 01:13:56 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:13:59.235550 | orchestrator | 2026-03-08 01:13:59 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:13:59.240753 | orchestrator | 2026-03-08 01:13:59 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:13:59.240822 | orchestrator | 2026-03-08 01:13:59 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:02.266599 | orchestrator | 2026-03-08 01:14:02 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:02.267281 | orchestrator | 2026-03-08 01:14:02 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:02.267327 | orchestrator | 2026-03-08 01:14:02 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:05.307321 | orchestrator | 2026-03-08 01:14:05 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:05.314641 | orchestrator | 2026-03-08 01:14:05 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:05.314725 | orchestrator | 2026-03-08 01:14:05 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:08.354769 | orchestrator | 2026-03-08 01:14:08 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:08.354905 | orchestrator | 2026-03-08 01:14:08 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:08.354917 | orchestrator | 2026-03-08 01:14:08 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:11.408002 | orchestrator | 2026-03-08 01:14:11 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:11.410388 | orchestrator | 2026-03-08 01:14:11 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:11.410613 | orchestrator | 2026-03-08 01:14:11 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:14.456004 | orchestrator | 2026-03-08 01:14:14 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:14.458186 | orchestrator | 2026-03-08 01:14:14 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:14.458827 | orchestrator | 2026-03-08 01:14:14 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:17.500738 | orchestrator | 2026-03-08 01:14:17 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:17.501129 | orchestrator | 2026-03-08 01:14:17 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:17.501156 | orchestrator | 2026-03-08 01:14:17 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:20.547990 | orchestrator | 2026-03-08 01:14:20 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:20.548700 | orchestrator | 2026-03-08 01:14:20 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:20.548787 | orchestrator | 2026-03-08 01:14:20 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:23.602087 | orchestrator | 2026-03-08 01:14:23 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:23.603487 | orchestrator | 2026-03-08 01:14:23 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:23.604023 | orchestrator | 2026-03-08 01:14:23 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:26.654410 | orchestrator | 2026-03-08 01:14:26 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:26.656326 | orchestrator | 2026-03-08 01:14:26 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:26.656386 | orchestrator | 2026-03-08 01:14:26 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:29.700071 | orchestrator | 2026-03-08 01:14:29 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:29.702165 | orchestrator | 2026-03-08 01:14:29 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:29.702213 | orchestrator | 2026-03-08 01:14:29 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:32.745391 | orchestrator | 2026-03-08 01:14:32 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:32.746301 | orchestrator | 2026-03-08 01:14:32 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:32.746337 | orchestrator | 2026-03-08 01:14:32 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:35.795136 | orchestrator | 2026-03-08 01:14:35 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:35.797855 | orchestrator | 2026-03-08 01:14:35 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:35.797936 | orchestrator | 2026-03-08 01:14:35 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:38.837157 | orchestrator | 2026-03-08 01:14:38 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:38.837869 | orchestrator | 2026-03-08 01:14:38 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:38.837892 | orchestrator | 2026-03-08 01:14:38 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:41.869249 | orchestrator | 2026-03-08 01:14:41 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state STARTED
2026-03-08 01:14:41.872210 | orchestrator | 2026-03-08 01:14:41 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:41.873107 | orchestrator | 2026-03-08 01:14:41 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:44.916747 | orchestrator |
2026-03-08 01:14:44.916793 | orchestrator |
2026-03-08 01:14:44.916800 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:14:44.916823 | orchestrator |
2026-03-08 01:14:44.916829 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:14:44.916834 | orchestrator | Sunday 08 March 2026 01:06:31 +0000 (0:00:00.181) 0:00:00.181 **********
2026-03-08 01:14:44.916838 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.916842 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:14:44.916845 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:14:44.916848 | orchestrator |
2026-03-08 01:14:44.916853 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:14:44.916858 | orchestrator | Sunday 08 March 2026 01:06:32 +0000 (0:00:00.315) 0:00:00.497 **********
2026-03-08 01:14:44.916863 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-08 01:14:44.916868 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-08 01:14:44.916873 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-08 01:14:44.916878 | orchestrator |
2026-03-08 01:14:44.916882 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-08 01:14:44.916887 | orchestrator |
2026-03-08 01:14:44.916892 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-08 01:14:44.916897 | orchestrator | Sunday 08 March 2026 01:06:32 +0000 (0:00:00.672) 0:00:01.169 **********
2026-03-08 01:14:44.916902 | orchestrator |
2026-03-08 01:14:44.916907 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-08 01:14:44.916913 | orchestrator |
2026-03-08 01:14:44.916918 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-08 01:14:44.916923 | orchestrator |
2026-03-08 01:14:44.916927 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-08 01:14:44.916930 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.916933 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:14:44.916936 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:14:44.916939 | orchestrator |
2026-03-08 01:14:44.916942 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:14:44.916953 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:14:44.916957 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:14:44.916960 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:14:44.916963 | orchestrator |
2026-03-08 01:14:44.916966 | orchestrator |
2026-03-08 01:14:44.916969 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:14:44.916973 | orchestrator | Sunday 08 March 2026 01:10:08 +0000 (0:03:35.854) 0:03:37.024 **********
2026-03-08 01:14:44.916976 | orchestrator | ===============================================================================
2026-03-08 01:14:44.916979 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 215.85s
2026-03-08 01:14:44.916982 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2026-03-08 01:14:44.916985 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-03-08 01:14:44.916988 | orchestrator |
2026-03-08 01:14:44.917006 | orchestrator | 2026-03-08 01:14:44 | INFO  | Task 760542ba-76b5-4179-b658-6ad67af063bf is in state SUCCESS
2026-03-08 01:14:44.918451 | orchestrator |
2026-03-08 01:14:44.918497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:14:44.918506 | orchestrator |
2026-03-08 01:14:44.918513 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-08 01:14:44.918520 | orchestrator | Sunday 08 March 2026 01:05:53 +0000 (0:00:00.290) 0:00:00.290 **********
2026-03-08 01:14:44.918526 | orchestrator | changed: [testbed-manager]
2026-03-08 01:14:44.918533 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.918539 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:14:44.918585 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:14:44.918592 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.918598 |
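The osism CLI output above polls two Celery task IDs every few seconds until each leaves the STARTED state ("Task … is in state STARTED" / "Wait 1 second(s) until the next check", ending in SUCCESS). A minimal sketch of that polling pattern, assuming a generic `get_state` callable and a fixed interval (neither is the actual osism implementation):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=600.0):
    """Poll each task until it reaches a terminal state, logging each
    round in the style of the job output above.

    get_state: callable task_id -> state string (hypothetical interface).
    """
    pending = list(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.remove(task_id)
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still pending after {timeout}s: {pending}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

The "Waiting for Nova public port to be UP" task that dominates the recap (215.85s) follows the same shape, except that the probe is a TCP connect against port 8774 on each node instead of a task-state lookup.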
orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.918604 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.918611 | orchestrator |
2026-03-08 01:14:44.918617 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:14:44.918623 | orchestrator | Sunday 08 March 2026 01:05:54 +0000 (0:00:00.847) 0:00:01.138 **********
2026-03-08 01:14:44.918630 | orchestrator | changed: [testbed-manager]
2026-03-08 01:14:44.918636 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.918642 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:14:44.918649 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:14:44.918655 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.918661 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.918668 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.918674 | orchestrator |
2026-03-08 01:14:44.918680 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:14:44.918687 | orchestrator | Sunday 08 March 2026 01:05:55 +0000 (0:00:01.419) 0:00:02.558 **********
2026-03-08 01:14:44.918693 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-08 01:14:44.918700 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-08 01:14:44.918707 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-08 01:14:44.918713 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-08 01:14:44.918719 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-08 01:14:44.918726 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-08 01:14:44.918732 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-08 01:14:44.918738 | orchestrator |
2026-03-08 01:14:44.918745 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-08 01:14:44.918751 | orchestrator |
2026-03-08 01:14:44.918783 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-08 01:14:44.918790 | orchestrator | Sunday 08 March 2026 01:05:57 +0000 (0:00:01.370) 0:00:03.929 **********
2026-03-08 01:14:44.918795 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:44.918808 | orchestrator |
2026-03-08 01:14:44.918814 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-08 01:14:44.918825 | orchestrator | Sunday 08 March 2026 01:05:58 +0000 (0:00:01.352) 0:00:05.281 **********
2026-03-08 01:14:44.918845 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-08 01:14:44.918852 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-08 01:14:44.918878 | orchestrator |
2026-03-08 01:14:44.918886 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-08 01:14:44.918893 | orchestrator | Sunday 08 March 2026 01:06:03 +0000 (0:00:04.613) 0:00:09.895 **********
2026-03-08 01:14:44.918900 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 01:14:44.918906 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-08 01:14:44.918913 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.918976 | orchestrator |
2026-03-08 01:14:44.918983 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-08 01:14:44.918986 | orchestrator | Sunday 08 March 2026 01:06:08 +0000 (0:00:05.139) 0:00:15.034 **********
2026-03-08 01:14:44.919008 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919013 | orchestrator |
2026-03-08 01:14:44.919016 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-08 01:14:44.919020 | orchestrator | Sunday 08 March 2026 01:06:09 +0000 (0:00:00.659) 0:00:15.693 **********
2026-03-08 01:14:44.919024 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919028 | orchestrator |
2026-03-08 01:14:44.919031 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-08 01:14:44.919035 | orchestrator | Sunday 08 March 2026 01:06:10 +0000 (0:00:01.588) 0:00:17.282 **********
2026-03-08 01:14:44.919045 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919049 | orchestrator |
2026-03-08 01:14:44.919052 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:44.919063 | orchestrator | Sunday 08 March 2026 01:06:13 +0000 (0:00:02.674) 0:00:19.956 **********
2026-03-08 01:14:44.919067 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919070 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919074 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919078 | orchestrator |
2026-03-08 01:14:44.919081 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-08 01:14:44.919085 | orchestrator | Sunday 08 March 2026 01:06:13 +0000 (0:00:00.434) 0:00:20.391 **********
2026-03-08 01:14:44.919089 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919093 | orchestrator |
2026-03-08 01:14:44.919097 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-08 01:14:44.919100 | orchestrator | Sunday 08 March 2026 01:06:49 +0000 (0:00:35.262) 0:00:55.653 **********
2026-03-08 01:14:44.919104 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919108 | orchestrator |
2026-03-08 01:14:44.919111 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-08 01:14:44.919115 | orchestrator | Sunday 08 March 2026 01:07:03 +0000 (0:00:14.967) 0:01:10.621 **********
2026-03-08 01:14:44.919119 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919123 | orchestrator |
2026-03-08 01:14:44.919126 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-08 01:14:44.919130 | orchestrator | Sunday 08 March 2026 01:07:18 +0000 (0:00:14.468) 0:01:25.089 **********
2026-03-08 01:14:44.919142 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919146 | orchestrator |
2026-03-08 01:14:44.919150 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-08 01:14:44.919154 | orchestrator | Sunday 08 March 2026 01:07:19 +0000 (0:00:01.251) 0:01:26.340 **********
2026-03-08 01:14:44.919158 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919161 | orchestrator |
2026-03-08 01:14:44.919165 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:44.919169 | orchestrator | Sunday 08 March 2026 01:07:20 +0000 (0:00:00.490) 0:01:26.831 **********
2026-03-08 01:14:44.919173 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:44.919177 | orchestrator |
2026-03-08 01:14:44.919180 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-08 01:14:44.919184 | orchestrator | Sunday 08 March 2026 01:07:20 +0000 (0:00:00.533) 0:01:27.364 **********
2026-03-08 01:14:44.919188 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919192 | orchestrator |
2026-03-08 01:14:44.919195 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-08 01:14:44.919199 | orchestrator | Sunday 08 March 2026 01:07:38 +0000 (0:00:18.170) 0:01:45.534 **********
2026-03-08 01:14:44.919203 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919207 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919210 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919214 | orchestrator |
2026-03-08 01:14:44.919218 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-08 01:14:44.919221 | orchestrator |
2026-03-08 01:14:44.919226 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-08 01:14:44.919232 | orchestrator | Sunday 08 March 2026 01:07:39 +0000 (0:00:00.378) 0:01:45.913 **********
2026-03-08 01:14:44.919238 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:44.919247 | orchestrator |
2026-03-08 01:14:44.919254 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-08 01:14:44.919261 | orchestrator | Sunday 08 March 2026 01:07:39 +0000 (0:00:00.681) 0:01:46.595 **********
2026-03-08 01:14:44.919267 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919278 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919283 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919289 | orchestrator |
2026-03-08 01:14:44.919295 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-08 01:14:44.919302 | orchestrator | Sunday 08 March 2026 01:07:41 +0000 (0:00:01.949) 0:01:48.544 **********
2026-03-08 01:14:44.919308 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919315 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919321 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919328 | orchestrator |
2026-03-08 01:14:44.919335 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-08 01:14:44.919343 | orchestrator | Sunday 08 March 2026 01:07:44 +0000 (0:00:02.583) 0:01:51.128 **********
2026-03-08 01:14:44.919347 | orchestrator | skipping: [testbed-node-0]
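The "Get a list of existing cells" / "Extract current cell settings from list" tasks above shell out to `nova-manage cell_v2 list_cells` inside the bootstrap container and parse its tabular output. A minimal sketch of such a parser, assuming the usual PrettyTable-style layout (the column set and sample values below are illustrative, not taken from this job):

```python
def parse_cell_list(output):
    """Parse PrettyTable-style `nova-manage cell_v2 list_cells` output
    into a list of dicts keyed by the header row.

    Separator lines (+----+) are skipped; only |-delimited rows count.
    """
    rows = [
        [cell.strip() for cell in line.strip().strip("|").split("|")]
        for line in output.splitlines()
        if line.strip().startswith("|")
    ]
    header, *data = rows
    return [dict(zip(header, row)) for row in data]
```

In the play above, the extracted settings are then compared against the desired transport and database URLs to decide between the "Create cell" and "Update cell" tasks (here "Create cell" ran and "Update cell" was skipped).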
2026-03-08 01:14:44.919351 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919355 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919358 | orchestrator |
2026-03-08 01:14:44.919362 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-08 01:14:44.919366 | orchestrator | Sunday 08 March 2026 01:07:44 +0000 (0:00:00.448) 0:01:51.577 **********
2026-03-08 01:14:44.919370 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 01:14:44.919373 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919377 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 01:14:44.919381 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919385 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-08 01:14:44.919388 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-08 01:14:44.919392 | orchestrator |
2026-03-08 01:14:44.919396 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-08 01:14:44.919400 | orchestrator | Sunday 08 March 2026 01:07:54 +0000 (0:00:09.504) 0:02:01.081 **********
2026-03-08 01:14:44.919404 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919407 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919411 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919415 | orchestrator |
2026-03-08 01:14:44.919419 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-08 01:14:44.919422 | orchestrator | Sunday 08 March 2026 01:07:54 +0000 (0:00:00.329) 0:02:01.411 **********
2026-03-08 01:14:44.919426 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-08 01:14:44.919430 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919436 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-08 01:14:44.919440 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919444 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-08 01:14:44.919448 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919451 | orchestrator |
2026-03-08 01:14:44.919455 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-08 01:14:44.919459 | orchestrator | Sunday 08 March 2026 01:07:55 +0000 (0:00:00.578) 0:02:01.989 **********
2026-03-08 01:14:44.919463 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919467 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919470 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919481 | orchestrator |
2026-03-08 01:14:44.919485 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-08 01:14:44.919488 | orchestrator | Sunday 08 March 2026 01:07:55 +0000 (0:00:00.544) 0:02:02.534 **********
2026-03-08 01:14:44.919492 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919496 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919500 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919503 | orchestrator |
2026-03-08 01:14:44.919507 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-08 01:14:44.919511 | orchestrator | Sunday 08 March 2026 01:07:56 +0000 (0:00:00.996) 0:02:03.530 **********
2026-03-08 01:14:44.919520 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919524 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919532 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919536 | orchestrator |
2026-03-08 01:14:44.919540 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-08 01:14:44.919576 | orchestrator | Sunday 08 March 2026 01:07:59 +0000 (0:00:02.234) 0:02:05.765 **********
2026-03-08 01:14:44.919582 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919586 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919589 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919593 | orchestrator |
2026-03-08 01:14:44.919597 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-08 01:14:44.919601 | orchestrator | Sunday 08 March 2026 01:08:22 +0000 (0:00:23.042) 0:02:28.807 **********
2026-03-08 01:14:44.919604 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919608 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919612 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919616 | orchestrator |
2026-03-08 01:14:44.919645 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-08 01:14:44.919649 | orchestrator | Sunday 08 March 2026 01:08:36 +0000 (0:00:14.140) 0:02:42.948 **********
2026-03-08 01:14:44.919653 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.919657 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919660 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919664 | orchestrator |
2026-03-08 01:14:44.919668 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-08 01:14:44.919672 | orchestrator | Sunday 08 March 2026 01:08:37 +0000 (0:00:00.904) 0:02:43.852 **********
2026-03-08 01:14:44.919688 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919692 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919696 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.919700 | orchestrator |
2026-03-08 01:14:44.919704 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-08 01:14:44.919708 | orchestrator | Sunday 08 March 2026 01:08:50 +0000 (0:00:13.698) 0:02:57.551 **********
2026-03-08 01:14:44.919711 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919715 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919719 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919742 | orchestrator |
2026-03-08 01:14:44.919746 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-08 01:14:44.919774 | orchestrator | Sunday 08 March 2026 01:08:51 +0000 (0:00:01.071) 0:02:58.622 **********
2026-03-08 01:14:44.919778 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.919782 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.919786 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.919789 | orchestrator |
2026-03-08 01:14:44.919793 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-08 01:14:44.919797 | orchestrator |
2026-03-08 01:14:44.919800 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-08 01:14:44.919804 | orchestrator | Sunday 08 March 2026 01:08:52 +0000 (0:00:00.544) 0:02:59.167 **********
2026-03-08 01:14:44.919808 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:14:44.919836 | orchestrator |
2026-03-08 01:14:44.919840 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-08 01:14:44.919844 | orchestrator | Sunday 08 March 2026 01:08:53 +0000 (0:00:00.593) 0:02:59.761 **********
2026-03-08 01:14:44.919848 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-08 01:14:44.919852 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-08 01:14:44.919856 | orchestrator |
2026-03-08 01:14:44.919860 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-08 01:14:44.919863 | orchestrator | Sunday 08 March 2026 01:08:56 +0000 (0:00:03.371) 0:03:03.132 **********
2026-03-08 01:14:44.919871 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-08 01:14:44.919875 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-08 01:14:44.919879 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-08 01:14:44.919883 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-08 01:14:44.919887 | orchestrator |
2026-03-08 01:14:44.919890 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-08 01:14:44.919894 | orchestrator | Sunday 08 March 2026 01:09:03 +0000 (0:00:06.814) 0:03:09.947 **********
2026-03-08 01:14:44.919904 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:14:44.919908 | orchestrator |
2026-03-08 01:14:44.919912 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-08 01:14:44.919915 | orchestrator | Sunday 08 March 2026 01:09:06 +0000 (0:00:03.651) 0:03:13.599 **********
2026-03-08 01:14:44.919919 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-08 01:14:44.919923 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:14:44.919927 | orchestrator |
2026-03-08 01:14:44.919930 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-08 01:14:44.919934 | orchestrator | Sunday 08 March 2026 01:09:11 +0000 (0:00:04.507) 0:03:18.106 **********
2026-03-08 01:14:44.919938 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:14:44.919942 | orchestrator |
2026-03-08 01:14:44.919945 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-08 01:14:44.919949 | orchestrator | Sunday 08 March 2026 01:09:14 +0000 (0:00:03.200) 0:03:21.307 **********
2026-03-08 01:14:44.919953 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-08 01:14:44.919957 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-08 01:14:44.919960 | orchestrator |
2026-03-08 01:14:44.919964 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-08 01:14:44.919971 | orchestrator | Sunday 08 March 2026 01:09:21 +0000 (0:00:06.607) 0:03:27.914 **********
2026-03-08 01:14:44.919977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:44.919984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:44.919993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-08 01:14:44.920001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.920006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.920010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.920014 | orchestrator |
2026-03-08 01:14:44.920018 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-08 01:14:44.920022 | orchestrator | Sunday 08 March 2026 01:09:22 +0000 (0:00:01.250) 0:03:29.165 **********
2026-03-08 01:14:44.920029 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.920032 | orchestrator |
2026-03-08 01:14:44.920036 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-08 01:14:44.920040 | orchestrator | Sunday 08 March 2026 01:09:22 +0000 (0:00:00.143) 0:03:29.309 **********
2026-03-08 01:14:44.920044 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.920047 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.920051 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.920055 | orchestrator |
2026-03-08 01:14:44.920059 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-08 01:14:44.920062 | orchestrator | Sunday 08 March 2026 01:09:23 +0000 (0:00:00.516) 0:03:29.825 **********
2026-03-08 01:14:44.920066 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-08 01:14:44.920070 | orchestrator |
2026-03-08 01:14:44.920074 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-08 01:14:44.920077 | orchestrator | Sunday 08 March 2026 01:09:23 +0000 (0:00:00.763) 0:03:30.589 **********
2026-03-08 01:14:44.920081 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.920085 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.920089 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.920092 | orchestrator | 2026-03-08 01:14:44.920096 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-08 01:14:44.920100 | orchestrator | Sunday 08 March 2026 01:09:24 +0000 (0:00:00.326) 0:03:30.915 ********** 2026-03-08 01:14:44.920103 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:14:44.920107 | orchestrator | 2026-03-08 01:14:44.920111 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-08 01:14:44.920115 | orchestrator | Sunday 08 March 2026 01:09:24 +0000 (0:00:00.547) 0:03:31.463 ********** 2026-03-08 01:14:44.920121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920157 | orchestrator | 2026-03-08 01:14:44.920161 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-08 01:14:44.920165 | orchestrator | Sunday 08 March 2026 01:09:27 +0000 (0:00:02.972) 0:03:34.435 ********** 2026-03-08 01:14:44.920169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 
01:14:44.920176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920180 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.920187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920191 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920195 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.920202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920213 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.920223 | orchestrator | 2026-03-08 01:14:44.920228 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-08 01:14:44.920231 | orchestrator | Sunday 08 March 2026 01:09:28 +0000 (0:00:00.582) 0:03:35.018 ********** 2026-03-08 01:14:44.920236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920251 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.920635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920656 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.920661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920672 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.920676 | orchestrator | 2026-03-08 01:14:44.920680 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-08 01:14:44.920684 | orchestrator | Sunday 08 March 2026 01:09:29 +0000 (0:00:00.786) 0:03:35.805 ********** 2026-03-08 01:14:44.920691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920728 | orchestrator | 2026-03-08 01:14:44.920732 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-08 01:14:44.920736 | orchestrator | Sunday 08 March 2026 01:09:32 +0000 (0:00:02.859) 0:03:38.664 ********** 2026-03-08 01:14:44.920740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920772 | orchestrator | 2026-03-08 01:14:44.920776 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-08 01:14:44.920780 | orchestrator | Sunday 08 March 2026 01:09:37 +0000 (0:00:05.435) 0:03:44.100 ********** 2026-03-08 01:14:44.920786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920830 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.920835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920843 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.920849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-08 01:14:44.920856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.920860 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.920863 | orchestrator | 2026-03-08 01:14:44.920867 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-08 01:14:44.920871 | orchestrator | Sunday 08 March 2026 01:09:38 +0000 (0:00:00.602) 0:03:44.703 ********** 2026-03-08 01:14:44.920875 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:44.920878 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:44.920882 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:44.920886 | orchestrator | 2026-03-08 01:14:44.920902 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-08 01:14:44.920906 | orchestrator | Sunday 08 March 2026 01:09:39 +0000 (0:00:01.484) 0:03:46.187 ********** 2026-03-08 01:14:44.920910 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.920913 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.920917 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.920921 | orchestrator | 2026-03-08 01:14:44.920925 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-08 01:14:44.920928 | orchestrator | 
Sunday 08 March 2026 01:09:39 +0000 (0:00:00.332) 0:03:46.520 ********** 2026-03-08 01:14:44.920932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-08 01:14:44.920952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.920964 | orchestrator | 2026-03-08 01:14:44.920968 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:44.920972 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:02.232) 0:03:48.752 
********** 2026-03-08 01:14:44.920976 | orchestrator | 2026-03-08 01:14:44.920980 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:44.920983 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:00.136) 0:03:48.889 ********** 2026-03-08 01:14:44.920987 | orchestrator | 2026-03-08 01:14:44.920991 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-08 01:14:44.920995 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:00.131) 0:03:49.020 ********** 2026-03-08 01:14:44.920998 | orchestrator | 2026-03-08 01:14:44.921005 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-08 01:14:44.921009 | orchestrator | Sunday 08 March 2026 01:09:42 +0000 (0:00:00.142) 0:03:49.163 ********** 2026-03-08 01:14:44.921012 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:44.921016 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:44.921020 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:44.921024 | orchestrator | 2026-03-08 01:14:44.921027 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-08 01:14:44.921031 | orchestrator | Sunday 08 March 2026 01:10:01 +0000 (0:00:18.690) 0:04:07.854 ********** 2026-03-08 01:14:44.921035 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:14:44.921039 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:14:44.921042 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:14:44.921046 | orchestrator | 2026-03-08 01:14:44.921050 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-08 01:14:44.921054 | orchestrator | 2026-03-08 01:14:44.921057 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:44.921061 | orchestrator | Sunday 08 March 2026 01:10:11 +0000 
(0:00:10.168) 0:04:18.022 ********** 2026-03-08 01:14:44.921067 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:14:44.921071 | orchestrator | 2026-03-08 01:14:44.921075 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:44.921079 | orchestrator | Sunday 08 March 2026 01:10:12 +0000 (0:00:01.249) 0:04:19.271 ********** 2026-03-08 01:14:44.921083 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.921086 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.921090 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.921094 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921098 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921101 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921105 | orchestrator | 2026-03-08 01:14:44.921109 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-08 01:14:44.921112 | orchestrator | Sunday 08 March 2026 01:10:13 +0000 (0:00:00.627) 0:04:19.899 ********** 2026-03-08 01:14:44.921116 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921120 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921124 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921127 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-08 01:14:44.921131 | orchestrator | 2026-03-08 01:14:44.921135 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-08 01:14:44.921141 | orchestrator | Sunday 08 March 2026 01:10:14 +0000 (0:00:01.049) 0:04:20.949 ********** 2026-03-08 01:14:44.921145 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-08 01:14:44.921149 | orchestrator | ok: [testbed-node-3] 
=> (item=br_netfilter) 2026-03-08 01:14:44.921153 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-08 01:14:44.921157 | orchestrator | 2026-03-08 01:14:44.921160 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-08 01:14:44.921164 | orchestrator | Sunday 08 March 2026 01:10:15 +0000 (0:00:00.773) 0:04:21.722 ********** 2026-03-08 01:14:44.921168 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-08 01:14:44.921172 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-08 01:14:44.921175 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-08 01:14:44.921179 | orchestrator | 2026-03-08 01:14:44.921183 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-08 01:14:44.921187 | orchestrator | Sunday 08 March 2026 01:10:16 +0000 (0:00:01.411) 0:04:23.134 ********** 2026-03-08 01:14:44.921191 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-08 01:14:44.921194 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.921200 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-08 01:14:44.921204 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.921208 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-08 01:14:44.921212 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.921215 | orchestrator | 2026-03-08 01:14:44.921219 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-08 01:14:44.921223 | orchestrator | Sunday 08 March 2026 01:10:17 +0000 (0:00:00.560) 0:04:23.694 ********** 2026-03-08 01:14:44.921227 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:44.921230 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 
01:14:44.921234 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921238 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:44.921242 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 01:14:44.921246 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:44.921249 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921253 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-08 01:14:44.921257 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-08 01:14:44.921261 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921265 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:44.921269 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:44.921273 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:44.921278 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-08 01:14:44.921282 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-08 01:14:44.921287 | orchestrator | 2026-03-08 01:14:44.921291 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-08 01:14:44.921295 | orchestrator | Sunday 08 March 2026 01:10:20 +0000 (0:00:03.252) 0:04:26.947 ********** 2026-03-08 01:14:44.921300 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921304 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921308 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921312 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:44.921317 | orchestrator | changed: 
[testbed-node-4] 2026-03-08 01:14:44.921321 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:44.921325 | orchestrator | 2026-03-08 01:14:44.921370 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-08 01:14:44.921388 | orchestrator | Sunday 08 March 2026 01:10:21 +0000 (0:00:01.185) 0:04:28.133 ********** 2026-03-08 01:14:44.921393 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921398 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921402 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921407 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:44.921411 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:44.921415 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:44.921420 | orchestrator | 2026-03-08 01:14:44.921424 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-08 01:14:44.921431 | orchestrator | Sunday 08 March 2026 01:10:23 +0000 (0:00:01.815) 0:04:29.948 ********** 2026-03-08 01:14:44.921436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 
01:14:44.921449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921602 | orchestrator | 2026-03-08 01:14:44.921609 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-08 01:14:44.921616 | orchestrator | Sunday 08 March 2026 01:10:25 +0000 (0:00:02.356) 0:04:32.305 ********** 2026-03-08 01:14:44.921620 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:14:44.921625 | orchestrator | 2026-03-08 01:14:44.921629 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-08 01:14:44.921633 | orchestrator | Sunday 08 March 2026 01:10:26 +0000 (0:00:01.255) 0:04:33.560 ********** 2026-03-08 01:14:44.921637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921692 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-08 01:14:44.921760 | orchestrator | 2026-03-08 01:14:44.921767 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-08 01:14:44.921773 | orchestrator | Sunday 08 March 2026 01:10:30 +0000 (0:00:03.833) 0:04:37.393 ********** 2026-03-08 01:14:44.921784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.921792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.921799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921806 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.921812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.921826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.921836 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.921844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 
01:14:44.921856 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.921863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921873 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.921882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.921894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921901 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.921911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.921917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921923 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.921929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.921937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.921948 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.921954 | orchestrator | 2026-03-08 01:14:44.921958 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-08 01:14:44.921962 | orchestrator | Sunday 08 March 2026 01:10:32 +0000 (0:00:01.665) 0:04:39.059 ********** 2026-03-08 01:14:44.921966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.921972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.922117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.922127 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.922131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.922135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.922147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-08 01:14:44.922151 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.922158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.922162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.922169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922173 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922187 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.922191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922199 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.922205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922216 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.922220 | orchestrator |
2026-03-08 01:14:44.922224 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-08 01:14:44.922228 | orchestrator | Sunday 08 March 2026 01:10:34 +0000 (0:00:02.374) 0:04:41.433 **********
2026-03-08 01:14:44.922232 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.922236 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.922239 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.922243 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-08 01:14:44.922247 | orchestrator |
2026-03-08 01:14:44.922252 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-08 01:14:44.922259 | orchestrator | Sunday 08 March 2026 01:10:35 +0000 (0:00:01.106) 0:04:42.539 **********
2026-03-08 01:14:44.922265 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:44.922276 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:14:44.922282 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:14:44.922288 | orchestrator |
2026-03-08 01:14:44.922294 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-08 01:14:44.922300 | orchestrator | Sunday 08 March 2026 01:10:36 +0000 (0:00:00.993) 0:04:43.533 **********
2026-03-08 01:14:44.922305 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:44.922311 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:14:44.922316 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:14:44.922322 | orchestrator |
2026-03-08 01:14:44.922328 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-08 01:14:44.922333 | orchestrator | Sunday 08 March 2026 01:10:37 +0000 (0:00:01.050) 0:04:44.584 **********
2026-03-08 01:14:44.922340 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:14:44.922346 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:14:44.922352 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:14:44.922358 | orchestrator |
2026-03-08 01:14:44.922364 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-08 01:14:44.922368 | orchestrator | Sunday 08 March 2026 01:10:38 +0000 (0:00:00.566) 0:04:45.150 **********
2026-03-08 01:14:44.922372 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:14:44.922375 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:14:44.922379 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:14:44.922383 | orchestrator |
2026-03-08 01:14:44.922390 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-08 01:14:44.922396 | orchestrator | Sunday 08 March 2026 01:10:39 +0000 (0:00:00.755) 0:04:45.906 **********
2026-03-08 01:14:44.922402 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:44.922409 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:44.922414 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:44.922421 | orchestrator |
2026-03-08 01:14:44.922427 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-08 01:14:44.922434 | orchestrator | Sunday 08 March 2026 01:10:40 +0000 (0:00:01.481) 0:04:47.387 **********
2026-03-08 01:14:44.922440 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:44.922447 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:44.922459 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:44.922463 | orchestrator |
2026-03-08 01:14:44.922471 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-08 01:14:44.922475 | orchestrator | Sunday 08 March 2026 01:10:41 +0000 (0:00:01.171) 0:04:48.558 **********
2026-03-08 01:14:44.922479 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:44.922483 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:44.922487 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:44.922490 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-08 01:14:44.922494 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-08 01:14:44.922498 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-08 01:14:44.922502 | orchestrator |
2026-03-08 01:14:44.922505 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-08 01:14:44.922509 | orchestrator | Sunday 08 March 2026 01:10:45 +0000 (0:00:03.987) 0:04:52.546 **********
2026-03-08 01:14:44.922516 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922520 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922524 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.922528 | orchestrator |
2026-03-08 01:14:44.922531 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-08 01:14:44.922535 | orchestrator | Sunday 08 March 2026 01:10:46 +0000 (0:00:00.560) 0:04:53.106 **********
2026-03-08 01:14:44.922539 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922559 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922566 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.922572 | orchestrator |
2026-03-08 01:14:44.922578 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-08 01:14:44.922584 | orchestrator | Sunday 08 March 2026 01:10:46 +0000 (0:00:00.351) 0:04:53.457 **********
2026-03-08 01:14:44.922589 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.922593 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.922597 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.922601 | orchestrator |
2026-03-08 01:14:44.922604 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-08 01:14:44.922608 | orchestrator | Sunday 08 March 2026 01:10:48 +0000 (0:00:01.314) 0:04:54.772 **********
2026-03-08 01:14:44.922616 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:44.922620 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:44.922624 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-08 01:14:44.922628 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:44.922632 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:44.922636 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-08 01:14:44.922640 | orchestrator |
2026-03-08 01:14:44.922643 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-08 01:14:44.922647 | orchestrator | Sunday 08 March 2026 01:10:51 +0000 (0:00:03.340) 0:04:58.113 **********
2026-03-08 01:14:44.922651 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 01:14:44.922655 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 01:14:44.922659 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 01:14:44.922662 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-08 01:14:44.922666 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.922670 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-08 01:14:44.922673 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.922677 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-08 01:14:44.922681 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.922685 | orchestrator |
2026-03-08 01:14:44.922688 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-03-08 01:14:44.922692 | orchestrator | Sunday 08 March 2026 01:10:54 +0000 (0:00:03.401) 0:05:01.515 **********
2026-03-08 01:14:44.922696 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.922700 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.922703 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.922707 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-03-08 01:14:44.922711 | orchestrator |
2026-03-08 01:14:44.922715 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-03-08 01:14:44.922718 | orchestrator | Sunday 08 March 2026 01:10:56 +0000 (0:00:01.690) 0:05:03.205 **********
2026-03-08 01:14:44.922722 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:44.922726 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-08 01:14:44.922730 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-08 01:14:44.922734 | orchestrator |
2026-03-08 01:14:44.922737 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-03-08 01:14:44.922741 | orchestrator | Sunday 08 March 2026 01:10:57 +0000 (0:00:01.345) 0:05:04.550 **********
2026-03-08 01:14:44.922748 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922752 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922756 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.922760 | orchestrator |
2026-03-08 01:14:44.922765 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-08 01:14:44.922770 | orchestrator | Sunday 08 March 2026 01:10:58 +0000 (0:00:00.335) 0:05:04.886 **********
2026-03-08 01:14:44.922774 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922779 | orchestrator |
2026-03-08 01:14:44.922784 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-08 01:14:44.922788 | orchestrator | Sunday 08 March 2026 01:10:58 +0000 (0:00:00.145) 0:05:05.032 **********
2026-03-08 01:14:44.922792 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922797 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922801 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.922805 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.922810 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.922814 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.922819 | orchestrator |
2026-03-08 01:14:44.922823 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-08 01:14:44.922827 | orchestrator | Sunday 08 March 2026 01:10:59 +0000 (0:00:00.638) 0:05:05.670 **********
2026-03-08 01:14:44.922831 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-08 01:14:44.922836 | orchestrator |
2026-03-08 01:14:44.922843 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-08 01:14:44.922848 | orchestrator | Sunday 08 March 2026 01:11:00 +0000 (0:00:01.031) 0:05:06.702 **********
2026-03-08 01:14:44.922853 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.922857 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.922861 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.922866 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.922871 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.922875 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.922880 | orchestrator |
2026-03-08 01:14:44.922884 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-08 01:14:44.922888 | orchestrator | Sunday 08 March 2026 01:11:00 +0000 (0:00:00.634) 0:05:07.337 **********
2026-03-08 01:14:44.922897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.922903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.922910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.922915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.922925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.922931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.922942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.922946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.922978 | orchestrator |
2026-03-08 01:14:44.922982 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-08 01:14:44.922986 | orchestrator | Sunday 08 March 2026 01:11:04 +0000 (0:00:03.529) 0:05:10.867 **********
2026-03-08 01:14:44.922990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.922996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.923007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-08 01:14:44.923020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923068 | orchestrator |
2026-03-08 01:14:44.923072 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-08 01:14:44.923076 | orchestrator | Sunday 08 March 2026 01:11:10 +0000 (0:00:06.518) 0:05:17.385 **********
2026-03-08 01:14:44.923080 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.923084 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.923088 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.923091 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.923097 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.923101 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.923107 | orchestrator |
2026-03-08 01:14:44.923111 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-08 01:14:44.923115 | orchestrator | Sunday 08 March 2026 01:11:12 +0000 (0:00:01.819) 0:05:19.205 **********
2026-03-08 01:14:44.923119 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923123 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923127 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923130 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923134 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923138 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923142 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.923145 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-08 01:14:44.923149 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923153 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.923157 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923160 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.923164 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923168 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923172 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-08 01:14:44.923176 | orchestrator |
2026-03-08 01:14:44.923180 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-08 01:14:44.923183 | orchestrator | Sunday 08 March 2026 01:11:16 +0000 (0:00:03.522)
0:05:22.727 ********** 2026-03-08 01:14:44.923187 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.923191 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.923195 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.923198 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923202 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923206 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923210 | orchestrator | 2026-03-08 01:14:44.923214 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-08 01:14:44.923218 | orchestrator | Sunday 08 March 2026 01:11:16 +0000 (0:00:00.589) 0:05:23.317 ********** 2026-03-08 01:14:44.923222 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 01:14:44.923229 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 01:14:44.923237 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-08 01:14:44.923247 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:44.923252 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:44.923258 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923263 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-08 01:14:44.923269 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923274 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923287 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923293 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923299 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923306 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923310 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-08 01:14:44.923314 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923318 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923322 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923325 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923329 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923336 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923340 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-08 01:14:44.923344 | orchestrator | 2026-03-08 01:14:44.923348 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-08 01:14:44.923352 | orchestrator | Sunday 08 March 2026 01:11:22 +0000 (0:00:05.895) 0:05:29.213 ********** 
2026-03-08 01:14:44.923355 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:44.923359 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:44.923363 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:44.923367 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:44.923371 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-08 01:14:44.923374 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:44.923378 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:44.923382 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-08 01:14:44.923386 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:44.923389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:44.923393 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-08 01:14:44.923397 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:44.923401 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:44.923405 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923409 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:44.923412 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923416 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-08 01:14:44.923420 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:44.923424 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-08 01:14:44.923431 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-08 01:14:44.923435 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923439 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:44.923443 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:44.923447 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-08 01:14:44.923450 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 01:14:44.923454 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 01:14:44.923458 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-08 01:14:44.923462 | orchestrator | 2026-03-08 01:14:44.923466 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-08 01:14:44.923472 | orchestrator | Sunday 08 March 2026 01:11:29 +0000 (0:00:06.699) 0:05:35.913 ********** 2026-03-08 01:14:44.923481 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.923488 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.923494 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.923500 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923507 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923513 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923518 | orchestrator | 2026-03-08 
01:14:44.923527 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-08 01:14:44.923534 | orchestrator | Sunday 08 March 2026 01:11:30 +0000 (0:00:00.861) 0:05:36.775 ********** 2026-03-08 01:14:44.923541 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.923575 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.923581 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.923588 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923594 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923601 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923607 | orchestrator | 2026-03-08 01:14:44.923614 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-08 01:14:44.923620 | orchestrator | Sunday 08 March 2026 01:11:30 +0000 (0:00:00.655) 0:05:37.430 ********** 2026-03-08 01:14:44.923627 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923633 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923639 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923643 | orchestrator | changed: [testbed-node-3] 2026-03-08 01:14:44.923647 | orchestrator | changed: [testbed-node-4] 2026-03-08 01:14:44.923651 | orchestrator | changed: [testbed-node-5] 2026-03-08 01:14:44.923655 | orchestrator | 2026-03-08 01:14:44.923658 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-08 01:14:44.923662 | orchestrator | Sunday 08 March 2026 01:11:32 +0000 (0:00:02.009) 0:05:39.439 ********** 2026-03-08 01:14:44.923671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.923676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.923684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-08 01:14:44.923688 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.923692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.923699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-08 01:14:44.923707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.923711 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.923715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-08 01:14:44.923722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-03-08 01:14:44.923726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.923730 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.923736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.923740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.923745 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.923757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.923764 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-08 01:14:44.923772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-08 01:14:44.923775 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923779 | orchestrator | 2026-03-08 01:14:44.923783 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-08 01:14:44.923787 | orchestrator | Sunday 08 March 2026 01:11:34 +0000 (0:00:01.639) 0:05:41.079 ********** 2026-03-08 01:14:44.923791 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-08 01:14:44.923794 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923798 | orchestrator | skipping: [testbed-node-3] 2026-03-08 01:14:44.923802 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-08 01:14:44.923806 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923810 | orchestrator | skipping: [testbed-node-4] 2026-03-08 01:14:44.923813 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-08 01:14:44.923817 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923821 | orchestrator | skipping: [testbed-node-5] 2026-03-08 01:14:44.923825 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-compute)  2026-03-08 01:14:44.923829 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923832 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:14:44.923836 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-08 01:14:44.923842 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923846 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:14:44.923850 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-08 01:14:44.923854 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-08 01:14:44.923858 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:14:44.923861 | orchestrator | 2026-03-08 01:14:44.923865 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-08 01:14:44.923869 | orchestrator | Sunday 08 March 2026 01:11:35 +0000 (0:00:00.947) 0:05:42.027 ********** 2026-03-08 01:14:44.923876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.923885 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.923889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-08 01:14:44.923893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-08 01:14:44.923920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-08 01:14:44.923924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-08 01:14:44.923988 | orchestrator |
2026-03-08 01:14:44.923992 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-08 01:14:44.923996 | orchestrator | Sunday 08 March 2026 01:11:38 +0000 (0:00:02.862) 0:05:44.890 **********
2026-03-08 01:14:44.924000 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924003 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.924007 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.924011 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.924015 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.924018 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.924022 | orchestrator |
2026-03-08 01:14:44.924026 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924030 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.847) 0:05:45.737 **********
2026-03-08 01:14:44.924034 | orchestrator |
2026-03-08 01:14:44.924037 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924041 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.158) 0:05:45.896 **********
2026-03-08 01:14:44.924045 | orchestrator |
2026-03-08 01:14:44.924049 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924053 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.164) 0:05:46.061 **********
2026-03-08 01:14:44.924057 | orchestrator |
2026-03-08 01:14:44.924061 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924064 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.134) 0:05:46.195 **********
2026-03-08 01:14:44.924068 | orchestrator |
2026-03-08 01:14:44.924072 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924076 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.292) 0:05:46.488 **********
2026-03-08 01:14:44.924082 | orchestrator |
2026-03-08 01:14:44.924086 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-08 01:14:44.924090 | orchestrator | Sunday 08 March 2026 01:11:39 +0000 (0:00:00.129) 0:05:46.617 **********
2026-03-08 01:14:44.924093 | orchestrator |
2026-03-08 01:14:44.924097 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-08 01:14:44.924101 | orchestrator | Sunday 08 March 2026 01:11:40 +0000 (0:00:00.168) 0:05:46.785 **********
2026-03-08 01:14:44.924105 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.924108 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:14:44.924112 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:14:44.924116 | orchestrator |
2026-03-08 01:14:44.924122 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-08 01:14:44.924126 | orchestrator | Sunday 08 March 2026 01:11:47 +0000 (0:00:07.617) 0:05:54.402 **********
2026-03-08 01:14:44.924133 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.924138 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:14:44.924149 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:14:44.924155 | orchestrator |
2026-03-08 01:14:44.924162 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-08 01:14:44.924168 | orchestrator | Sunday 08 March 2026 01:11:59 +0000 (0:00:11.873) 0:06:06.276 **********
2026-03-08 01:14:44.924174 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.924180 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.924186 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.924193 | orchestrator |
2026-03-08 01:14:44.924199 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-08 01:14:44.924206 | orchestrator | Sunday 08 March 2026 01:12:21 +0000 (0:00:21.403) 0:06:27.679 **********
2026-03-08 01:14:44.924213 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.924219 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.924226 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.924233 | orchestrator |
2026-03-08 01:14:44.924239 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-08 01:14:44.924246 | orchestrator | Sunday 08 March 2026 01:12:55 +0000 (0:00:34.317) 0:07:01.997 **********
2026-03-08 01:14:44.924258 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.924265 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-03-08 01:14:44.924272 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.924280 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.924287 | orchestrator |
2026-03-08 01:14:44.924294 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-08 01:14:44.924301 | orchestrator | Sunday 08 March 2026 01:13:01 +0000 (0:00:06.048) 0:07:08.045 **********
2026-03-08 01:14:44.924308 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.924315 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.924321 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.924328 | orchestrator |
2026-03-08 01:14:44.924334 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-08 01:14:44.924341 | orchestrator | Sunday 08 March 2026 01:13:02 +0000 (0:00:00.758) 0:07:08.804 **********
2026-03-08 01:14:44.924347 | orchestrator | changed: [testbed-node-3]
2026-03-08 01:14:44.924354 | orchestrator | changed: [testbed-node-5]
2026-03-08 01:14:44.924361 | orchestrator | changed: [testbed-node-4]
2026-03-08 01:14:44.924367 | orchestrator |
2026-03-08 01:14:44.924374 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-08 01:14:44.924380 | orchestrator | Sunday 08 March 2026 01:13:27 +0000 (0:00:25.494) 0:07:34.298 **********
2026-03-08 01:14:44.924387 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924393 | orchestrator |
2026-03-08 01:14:44.924400 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-08 01:14:44.924407 | orchestrator | Sunday 08 March 2026 01:13:27 +0000 (0:00:00.137) 0:07:34.436 **********
2026-03-08 01:14:44.924419 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924426 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.924433 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.924440 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.924446 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.924453 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-08 01:14:44.924460 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-08 01:14:44.924466 | orchestrator |
2026-03-08 01:14:44.924473 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-08 01:14:44.924479 | orchestrator | Sunday 08 March 2026 01:13:50 +0000 (0:00:22.451) 0:07:56.888 **********
2026-03-08 01:14:44.924486 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.924493 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.924500 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924506 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.924513 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.924520 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.924526 | orchestrator |
2026-03-08 01:14:44.924533 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-08 01:14:44.924540 | orchestrator | Sunday 08 March 2026 01:14:00 +0000 (0:00:10.132) 0:08:07.020 **********
2026-03-08 01:14:44.924559 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924566 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.924572 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.924579 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.924585 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.924592 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-03-08 01:14:44.924598 | orchestrator |
2026-03-08 01:14:44.924604 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-08 01:14:44.924611 | orchestrator | Sunday 08 March 2026 01:14:04 +0000 (0:00:04.000) 0:08:11.020 **********
2026-03-08 01:14:44.924618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-08 01:14:44.924625 | orchestrator |
2026-03-08 01:14:44.924632 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-08 01:14:44.924638 | orchestrator | Sunday 08 March 2026 01:14:20 +0000 (0:00:15.690) 0:08:26.711 **********
2026-03-08 01:14:44.924646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-08 01:14:44.924652 | orchestrator |
2026-03-08 01:14:44.924659 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-08 01:14:44.924666 | orchestrator | Sunday 08 March 2026 01:14:21 +0000 (0:00:01.542) 0:08:28.254 **********
2026-03-08 01:14:44.924672 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.924678 | orchestrator |
2026-03-08 01:14:44.924684 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-08 01:14:44.924695 | orchestrator | Sunday 08 March 2026 01:14:23 +0000 (0:00:01.466) 0:08:29.721 **********
2026-03-08 01:14:44.924701 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-08 01:14:44.924708 | orchestrator |
2026-03-08 01:14:44.924714 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-08 01:14:44.924721 | orchestrator | Sunday 08 March 2026 01:14:36 +0000 (0:00:13.710) 0:08:43.432 **********
2026-03-08 01:14:44.924727 | orchestrator | ok: [testbed-node-3]
2026-03-08 01:14:44.924734 | orchestrator | ok: [testbed-node-4]
2026-03-08 01:14:44.924740 | orchestrator | ok: [testbed-node-5]
2026-03-08 01:14:44.924746 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:14:44.924752 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:14:44.924758 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:14:44.924764 | orchestrator |
2026-03-08 01:14:44.924771 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-08 01:14:44.924781 | orchestrator |
2026-03-08 01:14:44.924787 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-08 01:14:44.924794 | orchestrator | Sunday 08 March 2026 01:14:38 +0000 (0:00:01.943) 0:08:45.375 **********
2026-03-08 01:14:44.924800 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:14:44.924806 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:14:44.924813 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:14:44.924819 | orchestrator |
2026-03-08 01:14:44.924826 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-08 01:14:44.924832 | orchestrator |
2026-03-08 01:14:44.924844 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-08 01:14:44.924851 | orchestrator | Sunday 08 March 2026 01:14:39 +0000 (0:00:01.101) 0:08:46.477 **********
2026-03-08 01:14:44.924857 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.924864 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.924870 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.924877 | orchestrator |
2026-03-08 01:14:44.924884 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-08 01:14:44.924890 | orchestrator |
2026-03-08 01:14:44.924896 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-08 01:14:44.924903 | orchestrator | Sunday 08 March 2026 01:14:40 +0000 (0:00:00.505) 0:08:46.983 **********
2026-03-08 01:14:44.924910 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-08 01:14:44.924917 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-08 01:14:44.924923 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-08 01:14:44.924930 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-08 01:14:44.924937 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-08 01:14:44.924944 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.924950 | orchestrator | skipping: [testbed-node-3]
2026-03-08 01:14:44.924957 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-08 01:14:44.924963 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-08 01:14:44.924969 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-08 01:14:44.924975 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-08 01:14:44.924982 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-08 01:14:44.924988 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.924995 | orchestrator | skipping: [testbed-node-4]
2026-03-08 01:14:44.925001 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-08 01:14:44.925007 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-08 01:14:44.925014 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-08 01:14:44.925020 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-08 01:14:44.925026 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-08 01:14:44.925032 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.925038 | orchestrator | skipping: [testbed-node-5]
2026-03-08 01:14:44.925044 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-08 01:14:44.925050 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-08 01:14:44.925056 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-08 01:14:44.925063 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-08 01:14:44.925069 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-08 01:14:44.925075 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.925081 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.925087 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-08 01:14:44.925098 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-08 01:14:44.925105 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-08 01:14:44.925111 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-08 01:14:44.925117 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-08 01:14:44.925123 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.925130 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.925136 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-08 01:14:44.925142 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-08 01:14:44.925149 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-08 01:14:44.925156 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-08 01:14:44.925162 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-08 01:14:44.925169 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-08 01:14:44.925175 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.925182 | orchestrator |
2026-03-08 01:14:44.925189 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-08 01:14:44.925196 | orchestrator |
2026-03-08 01:14:44.925205 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-08 01:14:44.925212 | orchestrator | Sunday 08 March 2026 01:14:41 +0000 (0:00:01.209) 0:08:48.192 **********
2026-03-08 01:14:44.925219 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-08 01:14:44.925225 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-08 01:14:44.925231 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.925238 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-08 01:14:44.925244 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-08 01:14:44.925251 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.925257 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-08 01:14:44.925263 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-08 01:14:44.925269 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.925276 | orchestrator |
2026-03-08 01:14:44.925282 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-08 01:14:44.925288 | orchestrator |
2026-03-08 01:14:44.925295 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-08 01:14:44.925301 | orchestrator | Sunday 08 March 2026 01:14:42 +0000 (0:00:00.671) 0:08:48.864 **********
2026-03-08 01:14:44.925307 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.925313 | orchestrator |
2026-03-08 01:14:44.925323 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-08 01:14:44.925329 | orchestrator |
2026-03-08 01:14:44.925335 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-08 01:14:44.925342 | orchestrator | Sunday 08 March 2026 01:14:42 +0000 (0:00:00.596) 0:08:49.461 **********
2026-03-08 01:14:44.925348 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:14:44.925354 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:14:44.925360 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:14:44.925366 | orchestrator |
2026-03-08 01:14:44.925373 | orchestrator | PLAY RECAP *********************************************************************
2026-03-08 01:14:44.925379 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-08 01:14:44.925386 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2026-03-08 01:14:44.925393 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-08 01:14:44.925403 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-08 01:14:44.925410 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-08 01:14:44.925417 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-08 01:14:44.925423 | orchestrator | testbed-node-5 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-08 01:14:44.925430 | orchestrator |
2026-03-08 01:14:44.925436 | orchestrator |
2026-03-08 01:14:44.925443 | orchestrator | TASKS RECAP ********************************************************************
2026-03-08 01:14:44.925449 | orchestrator | Sunday 08 March 2026 01:14:43 +0000 (0:00:00.538) 0:08:50.000 **********
2026-03-08 01:14:44.925455 | orchestrator | ===============================================================================
2026-03-08 01:14:44.925462 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.26s
2026-03-08 01:14:44.925468 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 34.32s
2026-03-08 01:14:44.925475 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.49s
2026-03-08 01:14:44.925482 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.04s
2026-03-08 01:14:44.925489 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.45s
2026-03-08 01:14:44.925495 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.40s
2026-03-08 01:14:44.925502 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.69s
2026-03-08 01:14:44.925509 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.17s
2026-03-08 01:14:44.925515 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.69s
2026-03-08 01:14:44.925521 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.97s
2026-03-08 01:14:44.925528 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.47s
2026-03-08 01:14:44.925534 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.14s
2026-03-08 01:14:44.925540 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.71s
2026-03-08 01:14:44.925556 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.70s
2026-03-08 01:14:44.925563 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.87s
2026-03-08 01:14:44.925570 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.17s
2026-03-08 01:14:44.925583 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.13s
2026-03-08 01:14:44.925593 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.50s
2026-03-08 01:14:44.925606 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.62s
2026-03-08 01:14:44.925612 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.81s
2026-03-08 01:14:44.925618 | orchestrator | 2026-03-08 01:14:44 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:44.925625 | orchestrator | 2026-03-08 01:14:44 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:47.969707 | orchestrator | 2026-03-08 01:14:47 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:47.969768 | orchestrator | 2026-03-08 01:14:47 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:51.015435 | orchestrator | 2026-03-08 01:14:51 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:51.015492 | orchestrator | 2026-03-08 01:14:51 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:54.053658 | orchestrator | 2026-03-08 01:14:54 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:54.053728 | orchestrator | 2026-03-08 01:14:54 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:14:57.094746 | orchestrator | 2026-03-08 01:14:57 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state STARTED
2026-03-08 01:14:57.094799 | orchestrator | 2026-03-08 01:14:57 | INFO  | Wait 1 second(s) until the next check
2026-03-08 01:15:00.143328 | orchestrator | 2026-03-08 01:15:00 | INFO  | Task 65d4a182-4349-4a47-951e-a853d1ee562d is in state SUCCESS
2026-03-08 01:15:00.145741 | orchestrator |
2026-03-08 01:15:00.145784 | orchestrator |
2026-03-08 01:15:00.145790 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-08 01:15:00.145795 | orchestrator |
2026-03-08 01:15:00.145799 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-08 01:15:00.145803 | orchestrator | Sunday 08 March 2026 01:10:13 +0000 (0:00:00.296) 0:00:00.296 **********
2026-03-08 01:15:00.145807 | orchestrator | ok: [testbed-node-0]
2026-03-08 01:15:00.145811 | orchestrator | ok: [testbed-node-1]
2026-03-08 01:15:00.145815 | orchestrator | ok: [testbed-node-2]
2026-03-08 01:15:00.145839 | orchestrator |
2026-03-08 01:15:00.145844 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-08 01:15:00.145850 | orchestrator | Sunday 08 March 2026 01:10:13 +0000 (0:00:00.306) 0:00:00.603 **********
2026-03-08 01:15:00.145854 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-08 01:15:00.145858 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-08 01:15:00.145862 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-08 01:15:00.145866 | orchestrator |
2026-03-08 01:15:00.145870 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-08 01:15:00.145874 | orchestrator |
2026-03-08 01:15:00.145878 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-08 01:15:00.145882 | orchestrator | Sunday 08 March 2026 01:10:14 +0000 (0:00:00.456) 0:00:01.059 **********
2026-03-08 01:15:00.145886 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:15:00.145890 | orchestrator |
2026-03-08 01:15:00.145894 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-08 01:15:00.145898 | orchestrator | Sunday 08 March 2026 01:10:15 +0000 (0:00:00.665) 0:00:01.725 **********
2026-03-08 01:15:00.145902 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-08 01:15:00.145906 | orchestrator |
2026-03-08 01:15:00.145909 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-08 01:15:00.145913 | orchestrator | Sunday 08 March 2026 01:10:18 +0000 (0:00:03.424) 0:00:05.150 **********
2026-03-08 01:15:00.145917 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-08 01:15:00.145921 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-08 01:15:00.145925 | orchestrator |
2026-03-08 01:15:00.145928 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-08 01:15:00.145932 | orchestrator | Sunday 08 March 2026 01:10:25 +0000 (0:00:06.976) 0:00:12.126 **********
2026-03-08 01:15:00.145936 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-08 01:15:00.145940 | orchestrator |
2026-03-08 01:15:00.145944 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-08 01:15:00.145947 | orchestrator | Sunday 08 March 2026 01:10:29 +0000 (0:00:03.582) 0:00:15.709 **********
2026-03-08 01:15:00.145951 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-08 01:15:00.145955 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-08 01:15:00.145959 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-08 01:15:00.145974 | orchestrator |
2026-03-08 01:15:00.145978 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-08 01:15:00.145982 | orchestrator | Sunday 08 March 2026 01:10:37 +0000 (0:00:08.155) 0:00:23.865 **********
2026-03-08 01:15:00.145986 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-08 01:15:00.145990 | orchestrator |
2026-03-08 01:15:00.145994 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-08 01:15:00.145997 | orchestrator | Sunday 08 March 2026 01:10:41 +0000 (0:00:03.949) 0:00:27.815 **********
2026-03-08 01:15:00.146007 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-08 01:15:00.146175 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-08 01:15:00.146181 | orchestrator |
2026-03-08 01:15:00.146185 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-08 01:15:00.146189 | orchestrator | Sunday 08 March 2026 01:10:48 +0000 (0:00:07.136) 0:00:34.951 **********
2026-03-08 01:15:00.146193 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-08 01:15:00.146197 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-08 01:15:00.146200 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-08 01:15:00.146208 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-08 01:15:00.146212 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-08 01:15:00.146216 | orchestrator |
2026-03-08 01:15:00.146220 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-08 01:15:00.146223 | orchestrator | Sunday 08 March 2026 01:11:03 +0000 (0:00:15.094) 0:00:50.046 **********
2026-03-08 01:15:00.146227 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-08 01:15:00.146231 | orchestrator |
2026-03-08 01:15:00.146235 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-08 01:15:00.146239 | orchestrator | Sunday 08 March 2026 01:11:03 +0000 (0:00:00.610) 0:00:50.657 **********
2026-03-08 01:15:00.146242 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:00.146246 | orchestrator |
2026-03-08 01:15:00.146250 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-08 01:15:00.146254 | orchestrator | Sunday 08 March 2026 01:11:09 +0000 (0:00:05.744) 0:00:56.401 **********
2026-03-08 01:15:00.146258
| orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146261 | orchestrator | 2026-03-08 01:15:00.146265 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-08 01:15:00.146275 | orchestrator | Sunday 08 March 2026 01:11:14 +0000 (0:00:04.787) 0:01:01.189 ********** 2026-03-08 01:15:00.146279 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146283 | orchestrator | 2026-03-08 01:15:00.146287 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-08 01:15:00.146291 | orchestrator | Sunday 08 March 2026 01:11:17 +0000 (0:00:03.132) 0:01:04.321 ********** 2026-03-08 01:15:00.146294 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-08 01:15:00.146298 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-08 01:15:00.146302 | orchestrator | 2026-03-08 01:15:00.146306 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-08 01:15:00.146310 | orchestrator | Sunday 08 March 2026 01:11:28 +0000 (0:00:10.786) 0:01:15.108 ********** 2026-03-08 01:15:00.146314 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-08 01:15:00.146318 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-08 01:15:00.146322 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-08 01:15:00.146331 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-08 01:15:00.146335 | orchestrator | 2026-03-08 01:15:00.146339 | orchestrator | TASK [octavia : Create loadbalancer 
management network] ************************ 2026-03-08 01:15:00.146343 | orchestrator | Sunday 08 March 2026 01:11:44 +0000 (0:00:15.907) 0:01:31.015 ********** 2026-03-08 01:15:00.146346 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146350 | orchestrator | 2026-03-08 01:15:00.146356 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-08 01:15:00.146362 | orchestrator | Sunday 08 March 2026 01:11:50 +0000 (0:00:05.763) 0:01:36.778 ********** 2026-03-08 01:15:00.146368 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146374 | orchestrator | 2026-03-08 01:15:00.146397 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-08 01:15:00.146403 | orchestrator | Sunday 08 March 2026 01:11:55 +0000 (0:00:05.051) 0:01:41.830 ********** 2026-03-08 01:15:00.146408 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.146415 | orchestrator | 2026-03-08 01:15:00.146430 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-08 01:15:00.146437 | orchestrator | Sunday 08 March 2026 01:11:55 +0000 (0:00:00.211) 0:01:42.042 ********** 2026-03-08 01:15:00.146444 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146450 | orchestrator | 2026-03-08 01:15:00.146456 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:00.146460 | orchestrator | Sunday 08 March 2026 01:11:59 +0000 (0:00:03.676) 0:01:45.718 ********** 2026-03-08 01:15:00.146463 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:00.146467 | orchestrator | 2026-03-08 01:15:00.146471 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-08 01:15:00.146475 | orchestrator | Sunday 08 March 2026 01:12:00 +0000 (0:00:01.083) 
0:01:46.801 ********** 2026-03-08 01:15:00.146480 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146487 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146493 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146499 | orchestrator | 2026-03-08 01:15:00.146505 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-08 01:15:00.146512 | orchestrator | Sunday 08 March 2026 01:12:06 +0000 (0:00:06.117) 0:01:52.918 ********** 2026-03-08 01:15:00.146518 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146524 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146536 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146542 | orchestrator | 2026-03-08 01:15:00.146549 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-08 01:15:00.146553 | orchestrator | Sunday 08 March 2026 01:12:10 +0000 (0:00:04.415) 0:01:57.334 ********** 2026-03-08 01:15:00.146557 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146561 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146564 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146568 | orchestrator | 2026-03-08 01:15:00.146572 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-08 01:15:00.146592 | orchestrator | Sunday 08 March 2026 01:12:11 +0000 (0:00:00.727) 0:01:58.062 ********** 2026-03-08 01:15:00.146600 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146607 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:00.146613 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:00.146619 | orchestrator | 2026-03-08 01:15:00.146626 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-08 01:15:00.146632 | orchestrator | Sunday 08 March 2026 01:12:13 +0000 (0:00:01.951) 0:02:00.013 ********** 
2026-03-08 01:15:00.146639 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146645 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146651 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146663 | orchestrator | 2026-03-08 01:15:00.146684 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-08 01:15:00.146692 | orchestrator | Sunday 08 March 2026 01:12:14 +0000 (0:00:01.279) 0:02:01.293 ********** 2026-03-08 01:15:00.146698 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146705 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146711 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146717 | orchestrator | 2026-03-08 01:15:00.146724 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-08 01:15:00.146730 | orchestrator | Sunday 08 March 2026 01:12:15 +0000 (0:00:01.245) 0:02:02.538 ********** 2026-03-08 01:15:00.146736 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146743 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146749 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146755 | orchestrator | 2026-03-08 01:15:00.146768 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-08 01:15:00.146774 | orchestrator | Sunday 08 March 2026 01:12:17 +0000 (0:00:01.872) 0:02:04.411 ********** 2026-03-08 01:15:00.146781 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.146787 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.146793 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.146800 | orchestrator | 2026-03-08 01:15:00.146807 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-08 01:15:00.146814 | orchestrator | Sunday 08 March 2026 01:12:19 +0000 (0:00:01.897) 0:02:06.309 ********** 2026-03-08 
01:15:00.146820 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146827 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:00.146833 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:00.146840 | orchestrator | 2026-03-08 01:15:00.146846 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-08 01:15:00.146852 | orchestrator | Sunday 08 March 2026 01:12:20 +0000 (0:00:00.785) 0:02:07.094 ********** 2026-03-08 01:15:00.146859 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146865 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:00.146872 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:00.146878 | orchestrator | 2026-03-08 01:15:00.146885 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:00.146892 | orchestrator | Sunday 08 March 2026 01:12:23 +0000 (0:00:03.047) 0:02:10.141 ********** 2026-03-08 01:15:00.146898 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:00.146905 | orchestrator | 2026-03-08 01:15:00.146911 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-08 01:15:00.146918 | orchestrator | Sunday 08 March 2026 01:12:24 +0000 (0:00:00.796) 0:02:10.938 ********** 2026-03-08 01:15:00.146924 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146930 | orchestrator | 2026-03-08 01:15:00.146937 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-08 01:15:00.146943 | orchestrator | Sunday 08 March 2026 01:12:28 +0000 (0:00:03.930) 0:02:14.869 ********** 2026-03-08 01:15:00.146950 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.146956 | orchestrator | 2026-03-08 01:15:00.146963 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-08 
01:15:00.146970 | orchestrator | Sunday 08 March 2026 01:12:31 +0000 (0:00:03.538) 0:02:18.408 ********** 2026-03-08 01:15:00.146976 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-08 01:15:00.146983 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-08 01:15:00.146989 | orchestrator | 2026-03-08 01:15:00.146996 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-08 01:15:00.147002 | orchestrator | Sunday 08 March 2026 01:12:38 +0000 (0:00:07.066) 0:02:25.474 ********** 2026-03-08 01:15:00.147008 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.147015 | orchestrator | 2026-03-08 01:15:00.147021 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-08 01:15:00.147033 | orchestrator | Sunday 08 March 2026 01:12:42 +0000 (0:00:03.215) 0:02:28.689 ********** 2026-03-08 01:15:00.147039 | orchestrator | ok: [testbed-node-0] 2026-03-08 01:15:00.147046 | orchestrator | ok: [testbed-node-1] 2026-03-08 01:15:00.147052 | orchestrator | ok: [testbed-node-2] 2026-03-08 01:15:00.147059 | orchestrator | 2026-03-08 01:15:00.147066 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-08 01:15:00.147072 | orchestrator | Sunday 08 March 2026 01:12:42 +0000 (0:00:00.376) 0:02:29.066 ********** 2026-03-08 01:15:00.147085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:00.147098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:00.147105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:00.147113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147140 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147168 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147216 | orchestrator | 2026-03-08 01:15:00.147222 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-08 01:15:00.147229 | orchestrator | Sunday 08 March 2026 01:12:44 +0000 (0:00:02.464) 0:02:31.531 ********** 2026-03-08 
01:15:00.147235 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.147242 | orchestrator | 2026-03-08 01:15:00.147251 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-08 01:15:00.147257 | orchestrator | Sunday 08 March 2026 01:12:45 +0000 (0:00:00.150) 0:02:31.682 ********** 2026-03-08 01:15:00.147264 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.147270 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:00.147276 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:00.147282 | orchestrator | 2026-03-08 01:15:00.147289 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-08 01:15:00.147295 | orchestrator | Sunday 08 March 2026 01:12:45 +0000 (0:00:00.586) 0:02:32.268 ********** 2026-03-08 01:15:00.147301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147347 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.147358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147398 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:00.147405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-08 01:15:00.147432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147445 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:00.147452 | orchestrator | 2026-03-08 01:15:00.147458 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-08 01:15:00.147464 | orchestrator | Sunday 08 March 2026 01:12:46 +0000 (0:00:00.751) 0:02:33.020 ********** 2026-03-08 01:15:00.147471 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-08 01:15:00.147477 | orchestrator | 2026-03-08 01:15:00.147484 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-08 01:15:00.147490 | orchestrator | 
Sunday 08 March 2026 01:12:46 +0000 (0:00:00.590) 0:02:33.610 ********** 2026-03-08 01:15:00.147499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:00.147510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2026-03-08 01:15:00.147520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-08 01:15:00.147527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-08 01:15:00.147549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147566 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-08 01:15:00.147732 | orchestrator | 2026-03-08 01:15:00.147739 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-08 01:15:00.147746 | orchestrator | Sunday 08 March 2026 01:12:52 +0000 (0:00:05.216) 0:02:38.827 ********** 2026-03-08 01:15:00.147753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147788 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.147798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147835 | orchestrator | skipping: [testbed-node-1] 2026-03-08 01:15:00.147845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147872 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147884 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:00.147891 | orchestrator | 2026-03-08 01:15:00.147897 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-08 01:15:00.147904 | orchestrator | Sunday 08 March 2026 01:12:52 +0000 (0:00:00.780) 0:02:39.607 ********** 2026-03-08 01:15:00.147935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.147942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.147953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.147970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-08 01:15:00.147976 | orchestrator | skipping: [testbed-node-0] 2026-03-08 01:15:00.148043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-08 01:15:00.148057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-08 01:15:00.148067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.148078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-08 01:15:00.148090 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148098 | orchestrator | skipping: [testbed-node-1]
2026-03-08 01:15:00.148105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148145 | orchestrator | skipping: [testbed-node-2]
2026-03-08 01:15:00.148151 | orchestrator |
2026-03-08 01:15:00.148158 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-03-08 01:15:00.148164 | orchestrator | Sunday 08 March 2026 01:12:53 +0000 (0:00:00.963) 0:02:40.571 **********
2026-03-08 01:15:00.148175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148510 | orchestrator |
2026-03-08 01:15:00.148517 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-03-08 01:15:00.148523 | orchestrator | Sunday 08 March 2026 01:12:58 +0000 (0:00:04.692) 0:02:45.263 **********
2026-03-08 01:15:00.148529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-08 01:15:00.148536 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-08 01:15:00.148542 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-08 01:15:00.148549 | orchestrator |
2026-03-08 01:15:00.148555 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-03-08 01:15:00.148561 | orchestrator | Sunday 08 March 2026 01:13:00 +0000 (0:00:01.932) 0:02:47.196 **********
2026-03-08 01:15:00.148625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.148652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.148671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.148703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.148728 | orchestrator |
2026-03-08 01:15:00.148734 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-08 01:15:00.148739 | orchestrator | Sunday 08 March 2026 01:13:19 +0000 (0:00:19.299) 0:03:06.496 **********
2026-03-08 01:15:00.148745 | orchestrator | changed: [testbed-node-0]
2026-03-08 01:15:00.148751 | orchestrator | changed: [testbed-node-1]
2026-03-08 01:15:00.148757 | orchestrator | changed: [testbed-node-2]
2026-03-08 01:15:00.148763 | orchestrator |
2026-03-08 01:15:00.148769 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-08 01:15:00.148774 | orchestrator | Sunday 08 March 2026 01:13:21 +0000 (0:00:01.588) 0:03:08.085 **********
2026-03-08 01:15:00.148779 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148786 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148795 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148802 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148809 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148815 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148822 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148828 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148832 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148836 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148839 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148843 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148847 | orchestrator |
2026-03-08 01:15:00.148850 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-08 01:15:00.148854 | orchestrator | Sunday 08 March 2026 01:13:26 +0000 (0:00:05.299) 0:03:13.384 **********
2026-03-08 01:15:00.148861 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148865 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148869 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148873 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148876 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148880 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148884 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148887 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148891 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.148895 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148898 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148902 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:00.148908 | orchestrator |
2026-03-08 01:15:00.148914 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-08 01:15:00.148921 | orchestrator | Sunday 08 March 2026 01:13:34 +0000 (0:00:07.394) 0:03:20.779 **********
2026-03-08 01:15:00.148926 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148933 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148938 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-08 01:15:00.148945 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148982 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148990 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-08 01:15:00.148996 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.149003 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.149010 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-08 01:15:00.149016 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-08 01:15:00.149023 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-08 01:15:00.149030 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-08 01:15:00.149036 | orchestrator |
2026-03-08 01:15:00.149043 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-08 01:15:00.149053 | orchestrator | Sunday 08 March 2026 01:13:39 +0000 (0:00:05.148) 0:03:25.928 **********
2026-03-08 01:15:00.149061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.149073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.149085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-08 01:15:00.149093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.149101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.149112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-08 01:15:00.149120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-08 01:15:00.149178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.149186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.149202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-08 01:15:00.149209 | orchestrator |
2026-03-08 01:15:00.149216 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-08 01:15:00.149223 | orchestrator | Sunday 08 March 2026 01:13:42 +0000 (0:00:03.467) 0:03:29.395 **********
2026-03-08 01:15:00.149230 | orchestrator | skipping: [testbed-node-0]
2026-03-08 01:15:00.149260 | orchestrator | skipping:
[testbed-node-1] 2026-03-08 01:15:00.149269 | orchestrator | skipping: [testbed-node-2] 2026-03-08 01:15:00.149276 | orchestrator | 2026-03-08 01:15:00.149282 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-08 01:15:00.149289 | orchestrator | Sunday 08 March 2026 01:13:43 +0000 (0:00:00.324) 0:03:29.719 ********** 2026-03-08 01:15:00.149296 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149300 | orchestrator | 2026-03-08 01:15:00.149305 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-08 01:15:00.149310 | orchestrator | Sunday 08 March 2026 01:13:45 +0000 (0:00:02.306) 0:03:32.026 ********** 2026-03-08 01:15:00.149314 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149318 | orchestrator | 2026-03-08 01:15:00.149323 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-08 01:15:00.149327 | orchestrator | Sunday 08 March 2026 01:13:47 +0000 (0:00:02.047) 0:03:34.073 ********** 2026-03-08 01:15:00.149332 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149336 | orchestrator | 2026-03-08 01:15:00.149341 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-08 01:15:00.149345 | orchestrator | Sunday 08 March 2026 01:13:49 +0000 (0:00:02.100) 0:03:36.174 ********** 2026-03-08 01:15:00.149350 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149354 | orchestrator | 2026-03-08 01:15:00.149359 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-08 01:15:00.149363 | orchestrator | Sunday 08 March 2026 01:13:52 +0000 (0:00:03.012) 0:03:39.186 ********** 2026-03-08 01:15:00.149368 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149372 | orchestrator | 2026-03-08 01:15:00.149377 | orchestrator | TASK [octavia : Flush handlers] 
************************************************ 2026-03-08 01:15:00.149381 | orchestrator | Sunday 08 March 2026 01:14:16 +0000 (0:00:23.525) 0:04:02.712 ********** 2026-03-08 01:15:00.149385 | orchestrator | 2026-03-08 01:15:00.149390 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-08 01:15:00.149395 | orchestrator | Sunday 08 March 2026 01:14:16 +0000 (0:00:00.073) 0:04:02.785 ********** 2026-03-08 01:15:00.149399 | orchestrator | 2026-03-08 01:15:00.149404 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-08 01:15:00.149408 | orchestrator | Sunday 08 March 2026 01:14:16 +0000 (0:00:00.067) 0:04:02.853 ********** 2026-03-08 01:15:00.149412 | orchestrator | 2026-03-08 01:15:00.149417 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-08 01:15:00.149421 | orchestrator | Sunday 08 March 2026 01:14:16 +0000 (0:00:00.072) 0:04:02.926 ********** 2026-03-08 01:15:00.149425 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149428 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.149432 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.149436 | orchestrator | 2026-03-08 01:15:00.149440 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-08 01:15:00.149443 | orchestrator | Sunday 08 March 2026 01:14:30 +0000 (0:00:14.625) 0:04:17.551 ********** 2026-03-08 01:15:00.149451 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149454 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.149458 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.149462 | orchestrator | 2026-03-08 01:15:00.149466 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-08 01:15:00.149469 | orchestrator | Sunday 08 March 2026 01:14:37 +0000 (0:00:06.853) 
0:04:24.405 ********** 2026-03-08 01:15:00.149476 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149480 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.149483 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.149487 | orchestrator | 2026-03-08 01:15:00.149491 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-08 01:15:00.149495 | orchestrator | Sunday 08 March 2026 01:14:43 +0000 (0:00:05.603) 0:04:30.008 ********** 2026-03-08 01:15:00.149498 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149502 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.149506 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.149509 | orchestrator | 2026-03-08 01:15:00.149513 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-08 01:15:00.149517 | orchestrator | Sunday 08 March 2026 01:14:48 +0000 (0:00:04.736) 0:04:34.745 ********** 2026-03-08 01:15:00.149520 | orchestrator | changed: [testbed-node-0] 2026-03-08 01:15:00.149524 | orchestrator | changed: [testbed-node-2] 2026-03-08 01:15:00.149528 | orchestrator | changed: [testbed-node-1] 2026-03-08 01:15:00.149531 | orchestrator | 2026-03-08 01:15:00.149535 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:15:00.149539 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-08 01:15:00.149543 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:15:00.149547 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-08 01:15:00.149551 | orchestrator | 2026-03-08 01:15:00.149554 | orchestrator | 2026-03-08 01:15:00.149558 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-08 01:15:00.149562 | orchestrator | Sunday 08 March 2026 01:14:58 +0000 (0:00:10.538) 0:04:45.283 ********** 2026-03-08 01:15:00.149569 | orchestrator | =============================================================================== 2026-03-08 01:15:00.149572 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.53s 2026-03-08 01:15:00.149576 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 19.30s 2026-03-08 01:15:00.149610 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.91s 2026-03-08 01:15:00.149614 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.09s 2026-03-08 01:15:00.149618 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.63s 2026-03-08 01:15:00.149622 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.78s 2026-03-08 01:15:00.149626 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.54s 2026-03-08 01:15:00.149629 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.16s 2026-03-08 01:15:00.149633 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.39s 2026-03-08 01:15:00.149637 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.14s 2026-03-08 01:15:00.149640 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.07s 2026-03-08 01:15:00.149644 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.98s 2026-03-08 01:15:00.149648 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.85s 2026-03-08 01:15:00.149655 | orchestrator | octavia : Create ports for 
Octavia health-manager nodes ----------------- 6.12s 2026-03-08 01:15:00.149659 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.76s 2026-03-08 01:15:00.149663 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.74s 2026-03-08 01:15:00.149666 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.60s 2026-03-08 01:15:00.149670 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.30s 2026-03-08 01:15:00.149674 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.22s 2026-03-08 01:15:00.149678 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.15s 2026-03-08 01:15:00.149681 | orchestrator | 2026-03-08 01:15:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-08 01:16:00.993204 | orchestrator | 2026-03-08 01:16:01.328787 | orchestrator | 2026-03-08 01:16:01.333909 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Mar 8 01:16:01 UTC 2026 2026-03-08 01:16:01.333986 | orchestrator | 2026-03-08 01:16:01.783601 | orchestrator | ok: Runtime: 0:36:22.365657 2026-03-08 01:16:02.082140 | 2026-03-08 01:16:02.082308 | TASK [Bootstrap services] 2026-03-08 01:16:02.895151 | orchestrator | 2026-03-08 01:16:02.895290 | orchestrator | # BOOTSTRAP 2026-03-08 01:16:02.895301 | orchestrator | 2026-03-08 01:16:02.895306 | orchestrator | + set -e 2026-03-08 01:16:02.895311 | orchestrator | + echo 2026-03-08 01:16:02.895317 | orchestrator | + echo '# BOOTSTRAP' 2026-03-08 01:16:02.895325 | orchestrator | + echo 
2026-03-08 01:16:02.895348 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-08 01:16:02.904313 | orchestrator | + set -e 2026-03-08 01:16:02.904387 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-08 01:16:07.973991 | orchestrator | 2026-03-08 01:16:07 | INFO  | It takes a moment until task 8f5ce403-0d03-4e85-8018-86df646360c7 (flavor-manager) has been started and output is visible here. 2026-03-08 01:16:16.907747 | orchestrator | 2026-03-08 01:16:12 | INFO  | Flavor SCS-1L-1 created 2026-03-08 01:16:16.907856 | orchestrator | 2026-03-08 01:16:12 | INFO  | Flavor SCS-1L-1-5 created 2026-03-08 01:16:16.907880 | orchestrator | 2026-03-08 01:16:12 | INFO  | Flavor SCS-1V-2 created 2026-03-08 01:16:16.907895 | orchestrator | 2026-03-08 01:16:12 | INFO  | Flavor SCS-1V-2-5 created 2026-03-08 01:16:16.907911 | orchestrator | 2026-03-08 01:16:12 | INFO  | Flavor SCS-1V-4 created 2026-03-08 01:16:16.907926 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-1V-4-10 created 2026-03-08 01:16:16.907938 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-1V-8 created 2026-03-08 01:16:16.907954 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-1V-8-20 created 2026-03-08 01:16:16.907981 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-2V-4 created 2026-03-08 01:16:16.907996 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-2V-4-10 created 2026-03-08 01:16:16.908011 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-2V-8 created 2026-03-08 01:16:16.908025 | orchestrator | 2026-03-08 01:16:13 | INFO  | Flavor SCS-2V-8-20 created 2026-03-08 01:16:16.908038 | orchestrator | 2026-03-08 01:16:14 | INFO  | Flavor SCS-2V-16 created 2026-03-08 01:16:16.908050 | orchestrator | 2026-03-08 01:16:14 | INFO  | Flavor SCS-2V-16-50 created 2026-03-08 01:16:16.908060 | orchestrator | 2026-03-08 01:16:14 | INFO  | Flavor SCS-4V-8 created 2026-03-08 01:16:16.908074 | orchestrator | 
2026-03-08 01:16:14 | INFO  | Flavor SCS-4V-8-20 created 2026-03-08 01:16:16.908088 | orchestrator | 2026-03-08 01:16:14 | INFO  | Flavor SCS-4V-16 created 2026-03-08 01:16:16.908101 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-4V-16-50 created 2026-03-08 01:16:16.908115 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-4V-32 created 2026-03-08 01:16:16.908128 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-4V-32-100 created 2026-03-08 01:16:16.908140 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-8V-16 created 2026-03-08 01:16:16.908152 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-8V-16-50 created 2026-03-08 01:16:16.908167 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-8V-32 created 2026-03-08 01:16:16.908181 | orchestrator | 2026-03-08 01:16:15 | INFO  | Flavor SCS-8V-32-100 created 2026-03-08 01:16:16.908195 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-16V-32 created 2026-03-08 01:16:16.908208 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-16V-32-100 created 2026-03-08 01:16:16.908222 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-2V-4-20s created 2026-03-08 01:16:16.908236 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-4V-8-50s created 2026-03-08 01:16:16.908250 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-4V-16-100s created 2026-03-08 01:16:16.908264 | orchestrator | 2026-03-08 01:16:16 | INFO  | Flavor SCS-8V-32-100s created 2026-03-08 01:16:19.513985 | orchestrator | 2026-03-08 01:16:19 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-08 01:16:29.605348 | orchestrator | 2026-03-08 01:16:29 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-08 01:16:29.685056 | orchestrator | 2026-03-08 01:16:29 | INFO  | Task c48fd485-5730-4581-a08a-2612120289e9 (bootstrap-basic) was prepared for execution. 
2026-03-08 01:16:29.685139 | orchestrator | 2026-03-08 01:16:29 | INFO  | It takes a moment until task c48fd485-5730-4581-a08a-2612120289e9 (bootstrap-basic) has been started and output is visible here. 2026-03-08 01:17:18.777571 | orchestrator | 2026-03-08 01:17:18.777664 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-08 01:17:18.777672 | orchestrator | 2026-03-08 01:17:18.777677 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-08 01:17:18.777681 | orchestrator | Sunday 08 March 2026 01:16:34 +0000 (0:00:00.072) 0:00:00.072 ********** 2026-03-08 01:17:18.777686 | orchestrator | ok: [localhost] 2026-03-08 01:17:18.777691 | orchestrator | 2026-03-08 01:17:18.777695 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-08 01:17:18.777699 | orchestrator | Sunday 08 March 2026 01:16:36 +0000 (0:00:02.062) 0:00:02.135 ********** 2026-03-08 01:17:18.777705 | orchestrator | ok: [localhost] 2026-03-08 01:17:18.777708 | orchestrator | 2026-03-08 01:17:18.777713 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-08 01:17:18.777716 | orchestrator | Sunday 08 March 2026 01:16:46 +0000 (0:00:10.486) 0:00:12.622 ********** 2026-03-08 01:17:18.777720 | orchestrator | changed: [localhost] 2026-03-08 01:17:18.777725 | orchestrator | 2026-03-08 01:17:18.777729 | orchestrator | TASK [Create public network] *************************************************** 2026-03-08 01:17:18.777733 | orchestrator | Sunday 08 March 2026 01:16:53 +0000 (0:00:07.168) 0:00:19.790 ********** 2026-03-08 01:17:18.777737 | orchestrator | changed: [localhost] 2026-03-08 01:17:18.777741 | orchestrator | 2026-03-08 01:17:18.777752 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-08 01:17:18.777758 | orchestrator | Sunday 08 March 2026 
01:16:59 +0000 (0:00:05.518) 0:00:25.309 ********** 2026-03-08 01:17:18.777764 | orchestrator | changed: [localhost] 2026-03-08 01:17:18.777770 | orchestrator | 2026-03-08 01:17:18.777775 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-08 01:17:18.777782 | orchestrator | Sunday 08 March 2026 01:17:06 +0000 (0:00:06.691) 0:00:32.001 ********** 2026-03-08 01:17:18.777788 | orchestrator | changed: [localhost] 2026-03-08 01:17:18.777794 | orchestrator | 2026-03-08 01:17:18.777800 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-08 01:17:18.777890 | orchestrator | Sunday 08 March 2026 01:17:10 +0000 (0:00:04.648) 0:00:36.650 ********** 2026-03-08 01:17:18.777895 | orchestrator | changed: [localhost] 2026-03-08 01:17:18.777898 | orchestrator | 2026-03-08 01:17:18.777902 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-08 01:17:18.777914 | orchestrator | Sunday 08 March 2026 01:17:14 +0000 (0:00:03.882) 0:00:40.533 ********** 2026-03-08 01:17:18.777918 | orchestrator | ok: [localhost] 2026-03-08 01:17:18.777922 | orchestrator | 2026-03-08 01:17:18.777926 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-08 01:17:18.777930 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-08 01:17:18.777935 | orchestrator | 2026-03-08 01:17:18.777939 | orchestrator | 2026-03-08 01:17:18.777943 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-08 01:17:18.777947 | orchestrator | Sunday 08 March 2026 01:17:18 +0000 (0:00:03.884) 0:00:44.417 ********** 2026-03-08 01:17:18.777951 | orchestrator | =============================================================================== 2026-03-08 01:17:18.777954 | orchestrator | Get volume type LUKS 
--------------------------------------------------- 10.49s 2026-03-08 01:17:18.777970 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.17s 2026-03-08 01:17:18.777974 | orchestrator | Set public network to default ------------------------------------------- 6.69s 2026-03-08 01:17:18.777978 | orchestrator | Create public network --------------------------------------------------- 5.52s 2026-03-08 01:17:18.777982 | orchestrator | Create public subnet ---------------------------------------------------- 4.65s 2026-03-08 01:17:18.777986 | orchestrator | Create manager role ----------------------------------------------------- 3.88s 2026-03-08 01:17:18.777990 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.88s 2026-03-08 01:17:18.777994 | orchestrator | Gathering Facts --------------------------------------------------------- 2.06s 2026-03-08 01:17:21.438234 | orchestrator | 2026-03-08 01:17:21 | INFO  | It takes a moment until task 3c5685a5-4014-49ba-a1ec-ac1306b92abe (image-manager) has been started and output is visible here. 2026-03-08 01:18:02.233801 | orchestrator | 2026-03-08 01:17:24 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-08 01:18:02.233853 | orchestrator | 2026-03-08 01:17:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-08 01:18:02.233859 | orchestrator | 2026-03-08 01:17:24 | INFO  | Importing image Cirros 0.6.2 2026-03-08 01:18:02.233900 | orchestrator | 2026-03-08 01:17:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-08 01:18:02.233906 | orchestrator | 2026-03-08 01:17:26 | INFO  | Waiting for image to leave queued state... 2026-03-08 01:18:02.233910 | orchestrator | 2026-03-08 01:17:28 | INFO  | Waiting for import to complete... 
2026-03-08 01:18:02.233914 | orchestrator | 2026-03-08 01:17:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-08 01:18:02.233918 | orchestrator | 2026-03-08 01:17:39 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-08 01:18:02.233922 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting internal_version = 0.6.2 2026-03-08 01:18:02.233926 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting image_original_user = cirros 2026-03-08 01:18:02.233930 | orchestrator | 2026-03-08 01:17:39 | INFO  | Adding tag os:cirros 2026-03-08 01:18:02.233934 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting property architecture: x86_64 2026-03-08 01:18:02.233937 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting property hw_disk_bus: scsi 2026-03-08 01:18:02.233941 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting property hw_rng_model: virtio 2026-03-08 01:18:02.233945 | orchestrator | 2026-03-08 01:17:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-08 01:18:02.233948 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property hw_watchdog_action: reset 2026-03-08 01:18:02.233952 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property hypervisor_type: qemu 2026-03-08 01:18:02.233959 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property os_distro: cirros 2026-03-08 01:18:02.233963 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property os_purpose: minimal 2026-03-08 01:18:02.233967 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property replace_frequency: never 2026-03-08 01:18:02.233971 | orchestrator | 2026-03-08 01:17:40 | INFO  | Setting property uuid_validity: none 2026-03-08 01:18:02.233974 | orchestrator | 2026-03-08 01:17:41 | INFO  | Setting property provided_until: none 2026-03-08 01:18:02.233978 | orchestrator | 2026-03-08 01:17:41 | INFO  | Setting property image_description: Cirros 2026-03-08 01:18:02.233982 | orchestrator | 2026-03-08 01:17:41 | INFO  | 
Setting property image_name: Cirros 2026-03-08 01:18:02.233995 | orchestrator | 2026-03-08 01:17:41 | INFO  | Setting property internal_version: 0.6.2 2026-03-08 01:18:02.233999 | orchestrator | 2026-03-08 01:17:41 | INFO  | Setting property image_original_user: cirros 2026-03-08 01:18:02.234002 | orchestrator | 2026-03-08 01:17:42 | INFO  | Setting property os_version: 0.6.2 2026-03-08 01:18:02.234006 | orchestrator | 2026-03-08 01:17:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-08 01:18:02.234041 | orchestrator | 2026-03-08 01:17:42 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-08 01:18:02.234050 | orchestrator | 2026-03-08 01:17:42 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-08 01:18:02.234057 | orchestrator | 2026-03-08 01:17:42 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-08 01:18:02.234067 | orchestrator | 2026-03-08 01:17:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-08 01:18:02.234074 | orchestrator | 2026-03-08 01:17:43 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-08 01:18:02.234081 | orchestrator | 2026-03-08 01:17:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-08 01:18:02.234088 | orchestrator | 2026-03-08 01:17:43 | INFO  | Importing image Cirros 0.6.3 2026-03-08 01:18:02.234093 | orchestrator | 2026-03-08 01:17:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-08 01:18:02.234096 | orchestrator | 2026-03-08 01:17:44 | INFO  | Waiting for image to leave queued state... 2026-03-08 01:18:02.234100 | orchestrator | 2026-03-08 01:17:46 | INFO  | Waiting for import to complete... 
2026-03-08 01:18:02.234114 | orchestrator | 2026-03-08 01:17:57 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-08 01:18:02.234118 | orchestrator | 2026-03-08 01:17:57 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-08 01:18:02.234122 | orchestrator | 2026-03-08 01:17:57 | INFO  | Setting internal_version = 0.6.3
2026-03-08 01:18:02.234126 | orchestrator | 2026-03-08 01:17:57 | INFO  | Setting image_original_user = cirros
2026-03-08 01:18:02.234129 | orchestrator | 2026-03-08 01:17:57 | INFO  | Adding tag os:cirros
2026-03-08 01:18:02.234133 | orchestrator | 2026-03-08 01:17:57 | INFO  | Setting property architecture: x86_64
2026-03-08 01:18:02.234137 | orchestrator | 2026-03-08 01:17:58 | INFO  | Setting property hw_disk_bus: scsi
2026-03-08 01:18:02.234140 | orchestrator | 2026-03-08 01:17:58 | INFO  | Setting property hw_rng_model: virtio
2026-03-08 01:18:02.234144 | orchestrator | 2026-03-08 01:17:58 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-08 01:18:02.234148 | orchestrator | 2026-03-08 01:17:58 | INFO  | Setting property hw_watchdog_action: reset
2026-03-08 01:18:02.234152 | orchestrator | 2026-03-08 01:17:58 | INFO  | Setting property hypervisor_type: qemu
2026-03-08 01:18:02.234155 | orchestrator | 2026-03-08 01:17:59 | INFO  | Setting property os_distro: cirros
2026-03-08 01:18:02.234159 | orchestrator | 2026-03-08 01:17:59 | INFO  | Setting property os_purpose: minimal
2026-03-08 01:18:02.234168 | orchestrator | 2026-03-08 01:17:59 | INFO  | Setting property replace_frequency: never
2026-03-08 01:18:02.234177 | orchestrator | 2026-03-08 01:17:59 | INFO  | Setting property uuid_validity: none
2026-03-08 01:18:02.234180 | orchestrator | 2026-03-08 01:17:59 | INFO  | Setting property provided_until: none
2026-03-08 01:18:02.234184 | orchestrator | 2026-03-08 01:18:00 | INFO  | Setting property image_description: Cirros
2026-03-08 01:18:02.234192 | orchestrator | 2026-03-08 01:18:00 | INFO  | Setting property image_name: Cirros
2026-03-08 01:18:02.234195 | orchestrator | 2026-03-08 01:18:00 | INFO  | Setting property internal_version: 0.6.3
2026-03-08 01:18:02.234199 | orchestrator | 2026-03-08 01:18:00 | INFO  | Setting property image_original_user: cirros
2026-03-08 01:18:02.234203 | orchestrator | 2026-03-08 01:18:00 | INFO  | Setting property os_version: 0.6.3
2026-03-08 01:18:02.234207 | orchestrator | 2026-03-08 01:18:01 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-08 01:18:02.234211 | orchestrator | 2026-03-08 01:18:01 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-08 01:18:02.234215 | orchestrator | 2026-03-08 01:18:01 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-08 01:18:02.234218 | orchestrator | 2026-03-08 01:18:01 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-08 01:18:02.234222 | orchestrator | 2026-03-08 01:18:01 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-08 01:18:02.561213 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-08 01:18:05.025432 | orchestrator | 2026-03-08 01:18:05 | INFO  | date: 2026-03-07
2026-03-08 01:18:05.025491 | orchestrator | 2026-03-08 01:18:05 | INFO  | image: octavia-amphora-haproxy-2024.2.20260307.qcow2
2026-03-08 01:18:05.025512 | orchestrator | 2026-03-08 01:18:05 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260307.qcow2
2026-03-08 01:18:05.025521 | orchestrator | 2026-03-08 01:18:05 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260307.qcow2.CHECKSUM
2026-03-08 01:18:05.123834 | orchestrator | 2026-03-08 01:18:05 | INFO  | checksum:
localhost | ok: "/var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/logs"
2026-03-08 01:18:37.740727 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/artifacts"
2026-03-08 01:18:37.998461 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/628392d9df5a4e3bac28b23c6f85c4d8/work/docs"
2026-03-08 01:18:38.023377 |
2026-03-08 01:18:38.023542 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-08 01:18:38.958120 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:38.958415 | orchestrator | changed: All items complete
2026-03-08 01:18:38.958455 |
2026-03-08 01:18:39.705329 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:40.433381 | orchestrator | changed: .d..t...... ./
2026-03-08 01:18:40.448717 |
2026-03-08 01:18:40.448856 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-08 01:18:40.476004 | orchestrator | skipping: Conditional result was False
2026-03-08 01:18:40.479979 | orchestrator | skipping: Conditional result was False
2026-03-08 01:18:40.490912 |
2026-03-08 01:18:40.490996 | PLAY RECAP
2026-03-08 01:18:40.491047 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-08 01:18:40.491075 |
2026-03-08 01:18:40.620993 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-08 01:18:40.623661 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-08 01:18:41.415283 |
2026-03-08 01:18:41.415465 | PLAY [Base post]
2026-03-08 01:18:41.431162 |
2026-03-08 01:18:41.431364 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-08 01:18:42.960030 | orchestrator | changed
2026-03-08 01:18:42.969648 |
2026-03-08 01:18:42.969780 | PLAY RECAP
2026-03-08 01:18:42.969856 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-08 01:18:42.969930 |
2026-03-08 01:18:43.089819 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-08 01:18:43.092318 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-08 01:18:43.900274 |
2026-03-08 01:18:43.900460 | PLAY [Base post-logs]
2026-03-08 01:18:43.911526 |
2026-03-08 01:18:43.911685 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-08 01:18:44.396662 | localhost | changed
2026-03-08 01:18:44.412462 |
2026-03-08 01:18:44.412742 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-08 01:18:44.451612 | localhost | ok
2026-03-08 01:18:44.458683 |
2026-03-08 01:18:44.458901 | TASK [Set zuul-log-path fact]
2026-03-08 01:18:44.487201 | localhost | ok
2026-03-08 01:18:44.501207 |
2026-03-08 01:18:44.501365 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-08 01:18:44.527969 | localhost | ok
2026-03-08 01:18:44.531658 |
2026-03-08 01:18:44.531765 | TASK [upload-logs : Create log directories]
2026-03-08 01:18:45.048156 | localhost | changed
2026-03-08 01:18:45.052783 |
2026-03-08 01:18:45.052933 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-08 01:18:45.586294 | localhost -> localhost | ok: Runtime: 0:00:00.008048
2026-03-08 01:18:45.590611 |
2026-03-08 01:18:45.590741 | TASK [upload-logs : Upload logs to log server]
2026-03-08 01:18:46.144521 | localhost | Output suppressed because no_log was given
2026-03-08 01:18:46.146361 |
2026-03-08 01:18:46.146468 | LOOP [upload-logs : Compress console log and json output]
2026-03-08 01:18:46.191834 | localhost | skipping: Conditional result was False
2026-03-08 01:18:46.197896 | localhost | skipping: Conditional result was False
2026-03-08 01:18:46.202910 |
2026-03-08 01:18:46.203130 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-08 01:18:46.250572 | localhost | skipping: Conditional result was False
2026-03-08 01:18:46.251199 |
2026-03-08 01:18:46.254435 | localhost | skipping: Conditional result was False
2026-03-08 01:18:46.265729 |
2026-03-08 01:18:46.265896 | LOOP [upload-logs : Upload console log and json output]